#juju-dev 2012-04-30
<TheMue> morning
<fwereade> heya TheMue
<TheMue> fwereade: just went outside on our veranda to work there. the nice side of home office.
<rogpeppe> fwereade, TheMue: mornin'
<TheMue> rogpeppe: heya
<fwereade> heya rogpeppe
<fwereade> TheMue, sounds lovely :)
<fwereade> TheMue, I find my balcony is just too damn hot/bright at the times I'd generally want to work out there, but hey ho
<rogpeppe> ah, sun, i have vague memories of that.
 * rogpeppe looks out at yet another grey wet day
<TheMue> Hope that the weather next week is fine.
<rogpeppe> me too
<rogpeppe> fwereade: niemeyer is suggesting that UploadTools becomes part of PutFile's functionality. i'm not quite sure how that'll work though, as there are two possible places to put and get things.
<rogpeppe> fwereade: i'm wondering what you think
<fwereade> rogpeppe, sorry, which are the places? I saw that suggestion, looked sane at first glance
<rogpeppe> fwereade: if you're a user, you get the tools from a public place, but you get bootstrap info from a private place
<fwereade> rogpeppe, ah, I'd had a vague assumption that the "local" version of the tools would always be stored in filestorage, and that would be the canonical location for that particular deployment, regardless of original source
<fwereade> rogpeppe, like charms (right?)
<rogpeppe> fwereade: i don't think every user should be charged with 5MB+ of uploads every bootstrap
<rogpeppe> fwereade: it slows it down apart from anything else.
<fwereade> rogpeppe, well, it wouldn't *necessarily* have to go through the user's machine (although not doing so would be somewhat fiddlier)
<rogpeppe> fwereade: it would using that API
<rogpeppe> fwereade: and in fact i'm not sure it's possible to initiate S3->S3 transfers without involving either a new machine in the cloud or shuttling the data through the local machine
<rogpeppe> but there might be i guess
<rogpeppe> interesting: http://docs.amazonwebservices.com/AmazonS3/latest/API/RESTObjectCOPY.html
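For reference, the COPY operation linked above is an ordinary PUT carrying an `x-amz-copy-source` header, so the object data never transits the client. A minimal sketch of the request shape (bucket and key names are hypothetical, and the auth headers are omitted):

```python
# Sketch of an S3 server-side COPY request (hypothetical bucket/key names).
# A PUT with an x-amz-copy-source header asks S3 to copy object-to-object,
# so the tools tarball need not be shuttled through the local machine.
# Authentication headers are omitted for brevity.

def s3_copy_request(dest_bucket, dest_key, src_bucket, src_key):
    """Return (method, url, headers) describing an S3 object COPY."""
    url = "https://%s.s3.amazonaws.com/%s" % (dest_bucket, dest_key)
    headers = {
        # S3 reads the source from this header instead of the request body.
        "x-amz-copy-source": "/%s/%s" % (src_bucket, src_key),
    }
    return "PUT", url, headers

method, url, headers = s3_copy_request(
    "private-control-bucket", "tools/juju-r123.tgz",
    "juju-dist", "tools/juju-r123.tgz")
```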
<fwereade> rogpeppe, I'd been thinking that in the local case, we upload somewhere and somehow communicate "there" to the bootstrap machine; which would always copy the juju files to the canonical location, regardless of source (but yeah, that could maybe shortcut it)
<fwereade> rogpeppe, (it actually feels like things could be a bit annoying if we upload to the canonical location directly from the client)
<rogpeppe> fwereade: by "the canonical location" are you thinking of a file path or a URL?
<fwereade> rogpeppe, URL
<rogpeppe> fwereade: so in the local case you're suggesting we store the binaries at two URLs?
<rogpeppe> fwereade: one for the initial upload and one "canonical" place?
<fwereade> rogpeppe, I'm not sure it's *actually* necessary to do so, and I think it's an implementation detail
<rogpeppe> fwereade: i'm not sure i see the advantage in doing so
<fwereade> rogpeppe, I'm really just saying that I don't think a deployment-local copy of the source code necessarily involves copying publicly available stuff through the user's machine
<rogpeppe> fwereade: we've got to copy things *from* the user's machine, right?
<fwereade> rogpeppe, only in dev mode, surely?
<rogpeppe> fwereade: yeah, sorry i thought that's what you meant by "deployment-local"
<fwereade> rogpeppe, ha, sorry
<rogpeppe> fwereade: why would non-devs need a deployment-local copy of the source code?
<fwereade> rogpeppe, I'm just having a quick look at the python to remind myself of something
<rogpeppe> or the binaries, come to that?
<fwereade> rogpeppe, I'm not privy to the details of the original discussions; my only non-waffly/hedged argument is "because we decided to do that for charms", and I'm not convinced the two cases are different enough to warrant inconsistency ;)
<fwereade> rogpeppe, it doesn't feel to me like an intrinsically bad thing for a deployment to pull in all its dependencies as they become apparent, though
<fwereade> rogpeppe, it should I think just make things a little less dependent on the vagaries of the wider internets ;)
<fwereade> brb
<rogpeppe> fwereade: yes, i agree with that. and it seems like the storage cost of storing the tools once per deployment is small.
<rogpeppe> fwereade: however...
<rogpeppe> fwereade: what about multiple architectures/OS versions etc?
<fwereade> rogpeppe, sorry, not sure: what about them? won't we potentially need copies for each, whatever we do, independent of anything else?
<rogpeppe> fwereade: so you're suggesting that every time we add a unit, we potentially copy tools to the local provider storage?
<fwereade> rogpeppe, I think so; just like we do on deploy, right?
<rogpeppe> fwereade: this wasn't the plan i'd understood. but maybe it's a good thing to do.
<fwereade> rogpeppe, hm, I don't think it's necessarily intrinsically incompatible with what we'd discussed
<rogpeppe> fwereade: i'd thought that upload-client was an explicit flag that we would give
<fwereade> rogpeppe, I think we have a bit of freedom to do dev/real slightly differently
<rogpeppe> fwereade: how would dev and real be different?
<fwereade> rogpeppe, I'd seen the upload-client flag as an explicit I'm-in-dev-mode thing
<rogpeppe> fwereade: me too. but i think you're suggesting that real deployments do something similar?
<fwereade> rogpeppe, I'm suggesting that deployments should internally take responsibility for distributing juju binaries just as they do with charms
<fwereade> rogpeppe, and that the dev mode stuff simply give devs an opportunity to hook into that mechanism at a useful point
<fwereade> rogpeppe, it may ofc be that the charm-distribution strategy is a historical accident and I'm entirely on crack
<rogpeppe> fwereade: how do we deal with different charm versions for different platforms?
<fwereade> rogpeppe, I don't think we do
<rogpeppe> fwereade: 'cos that's something we need to deal with for juju tools
<fwereade> rogpeppe, I think that's up to the charm authors
<rogpeppe> fwereade: i thought so
<fwereade> rogpeppe, and yes, I see that they're not 100% the same situation
<fwereade> rogpeppe, (well, in a sense we do, in that we have separate oneiric and precise versions of some charms, and that means they carry a different identifier)
<rogpeppe> fwereade: but not, say, 386 vs amd64 vs arm?
<fwereade> rogpeppe, there's nothing stopping me running wordpress on precise and oneiric in the same deployment, although I'm not sure why I'd want to
<fwereade> rogpeppe, no; I think the core idea of variation is still in there
<rogpeppe> fwereade: so... if we go this direction, how does the initial bootstrap happen?
<rogpeppe> fwereade: bootstrap init pulls from a URL and pushes to local provider storage?
<fwereade> rogpeppe, I think so, yes
<rogpeppe> fwereade: maybe that only needs to happen when the first unit is deployed.
<fwereade> rogpeppe, yeah, I think so; it feels sensible to me to keep using the same versions everywhere until we're told to upgrade
<fwereade> rogpeppe, (but then I also think it makes sense to be lazy about grabbing (say) the 2.4.2 ARM binaries until we actually deploy to ARM)
<rogpeppe> fwereade: yeah. that's where i was coming from.
<fwereade> rogpeppe, I'm just not quite sure where that fits in with dev mode
<rogpeppe> fwereade: indeed. i'm all a bit confused about how things would work exactly now.
<fwereade> rogpeppe, but a build-absolutely-everything script, and a dev-mode bootstrap that pushes everything, sounds, well, plausible at least
<fwereade> rogpeppe, I'm definitely drawing heavy inspiration from how charm publishing works atm
<fwereade> rogpeppe, er, in python, not in the store
<fwereade> rogpeppe, that's something that appears to "just work" pretty well
<fwereade> rogpeppe, but I am suddenly starting to worry that there are enough subtle differences that I've led myself astray
<rogpeppe> fwereade: so... is this about right: when you want to deploy a service, an agent (the machine agent?) pulls the charm from a URL and pushes to the local provider storage. then when a unit starts, the unit agent pulls the charm from local provider storage and runs it?
<fwereade> rogpeppe, unless I've really missed something, the charms even from the store round-trip through your local machine
<rogpeppe> fwereade: i don't know. the main significant difference AFAICS is that there are no bootstrapping issues with charms.
<fwereade> rogpeppe, I consider this to be sucky *but* an implementation detail
<rogpeppe> fwereade: really?
<fwereade> rogpeppe, pretty sure
<rogpeppe> fwereade: i'd prefer that not to be true for the tools. they're 10MB already and will be larger.
<fwereade> rogpeppe, "charm = yield repo.find(charm_url)"
<fwereade> rogpeppe, hence my talking about it being an implementation detail
<fwereade> rogpeppe, hm wait a mo
<fwereade> rogpeppe, no, I'm right, repo.find downloads
<fwereade> rogpeppe, I'd prefer also that it not be true for the charms
<rogpeppe> fwereade: agreed.
<fwereade> rogpeppe, but I think that in neither case is it actually *necessary* that the publicly available stuff be downloaded
<fwereade> rogpeppe, I'm suggesting the philosophy is worth copying not the implementation
<rogpeppe> fwereade: i think i agree. i'm just trying to work out what it means.
<rogpeppe> fwereade: how's this for a sketch: http://paste.ubuntu.com/956991/
<rogpeppe> fwereade: actually, one slight mod: http://paste.ubuntu.com/956997/
<fwereade> rogpeppe, in essence yes; I think there are maybe some arch-related subtleties around the "version" there, but I think-hope they're impish details rather than fully devilish ones
<rogpeppe> fwereade: what kind of things are you thinking of there?
<fwereade> rogpeppe, I *think* it'll only be an issue in dev mode
<fwereade> rogpeppe, and I think as a dev I can put up with building everything I plan to need for my env before bootstrapping and just splurging them all up there at bootstrap time
<rogpeppe> fwereade: yeah. i think you have to.
<fwereade> rogpeppe, with that assumption in place it feels like it'd work
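The lazy-mirroring scheme the two converge on above could be sketched roughly like this (all names, URLs, and the storage API are hypothetical stand-ins, not juju's real interfaces): the environment mirrors tools from a public source into its own storage on first use, and agents only ever read the mirrored copy.

```python
# Hedged sketch of the tools-distribution idea discussed above
# (all names hypothetical): the environment lazily mirrors tools from a
# public source into its own storage; agents read only the mirrored copy.

class EnvironStorage:
    """Stand-in for provider storage (e.g. the S3 control bucket)."""
    def __init__(self):
        self._files = {}
    def put(self, path, data):
        self._files[path] = data
    def get(self, path):
        return self._files.get(path)

def fetch_public(url):
    # Placeholder for an HTTP GET of the public tools tarball.
    return b"tools-tarball-from " + url.encode()

def ensure_tools(storage, version, series, arch):
    """Mirror tools into environment storage on first use (lazy)."""
    path = "tools/juju-%s-%s-%s.tgz" % (version, series, arch)
    if storage.get(path) is None:
        public_url = "https://juju-dist.example.com/" + path  # hypothetical
        storage.put(path, fetch_public(public_url))
    return path

storage = EnvironStorage()
p1 = ensure_tools(storage, "1.0.0", "precise", "amd64")
p2 = ensure_tools(storage, "1.0.0", "precise", "amd64")  # already mirrored
```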
<rogpeppe> fwereade: and you *can't* deploy to a machine with a different arch if you want to use your local version
<fwereade> rogpeppe, hmm, can't we cross-compile?
<rogpeppe> fwereade: not if we want to use DNS.
<rogpeppe> (DNS uses cgo, which means that we need the dynamic libraries available when we compile)
<fwereade> rogpeppe, ty
<fwereade> rogpeppe, but this means we are actually not able to cross-compile at all?
<rogpeppe> (i might be wrong about some of the details there)
<rogpeppe> fwereade: pretty much, yeah.
<fwereade> rogpeppe, I'm comfortable avoiding a deep-dive on the subject, that sounds like the critical issue
<rogpeppe> fwereade: well... no, you can, and it's dead easy
<fwereade> rogpeppe, ah ok...
<rogpeppe> fwereade: but you don't get certain packages
<fwereade> rogpeppe, ah, got it; so DNS really did mean DNS :)
<rogpeppe> fwereade: the go compiler is always a cross compiler too
<fwereade> rogpeppe, so I had thought :)
<rogpeppe> fwereade: yeah.
<rogpeppe> fwereade: the problem is that on some platforms you just can't do everything by invoking syscalls
<rogpeppe> fwereade: there's magic in the local libraries too, so you need to call them.
<fwereade> rogpeppe, that said I think we touched on it in the thread and niemeyer agreed that no-foreign-dev-builds was an acceptable tradeoff for now anyway
<fwereade> rogpeppe, ha, ok
<fwereade> rogpeppe, TheMue: https://codereview.appspot.com/6129053
<fwereade> ...and lunch
<TheMue> fwereade: enjoy
<rogpeppe> fwereade: reviewed
<fwereade> rogpeppe, tyvm
<rogpeppe> fwereade: i've updated the uploadtools proposal in the light of gustavo's comments, which i think are compatible with the result of our discussion earlier. https://codereview.appspot.com/6128046/
<fwereade> rog, cheers
<rogpeppe> (it's a pity that codereview/lbox doesn't deal with file moves well)
 * rogpeppe just deleted >70 lines of slightly dubious code. oh, the pleasure!
<rogpeppe> make that >100
<niemeyer> Hello!
<niemeyer> fwereade, rogpeppe, TheMue: Anyone around still?
<rogpeppe> niemeyer: yo!
<rogpeppe> niemeyer: i think we all are, probably
<rogpeppe> niemeyer: we could even have our start-the-week meeting maybe
<rogpeppe> niemeyer: how's things in californeye-ay?
<TheMue> niemeyer: Heya.
<niemeyer> rogpeppe: Probably not today, at least.. we're about to have the kick off talk by Mark
<rogpeppe> niemeyer: k
<TheMue> rogpeppe: BBQ is already hot here.
<niemeyer> rogpeppe: I could wake up earlier tomorrow so we can do it, though
<niemeyer> TheMue: Nice :)
<rogpeppe> TheMue: sounds like you've got nice weather. jealous!
<rogpeppe> niemeyer: that sounds good
<TheMue> niemeyer: Yes, indeed, traditional party starting into May.
<TheMue> rogpeppe: 20°C, sun is shining.
<rogpeppe> niemeyer: i wonder if i could run something past you briefly that i discussed with fwereade this morning
<niemeyer> rogpeppe: Sure, what's up?
<rogpeppe> niemeyer: it's to do with how we do the bootstrap stuff
<rogpeppe> niemeyer: here's a sketch of how i *think* it might work
<rogpeppe> niemeyer: http://paste.ubuntu.com/957728/
<rogpeppe> niemeyer: the difficulty being that unlike Put/GetFile, upload tools has two places to go to
 * niemeyer reading through
<rogpeppe> niemeyer: there should be a blank line before "on bootstrap machine init" BTW
<niemeyer> rogpeppe: It all sounds good until the provisioning agent bits
<rogpeppe> niemeyer: go on
<niemeyer> rogpeppe: I'm wondering why it has to recompute
<niemeyer> rogpeppe: I guess it makes sense, since there are different circumstances depending on what's being run
<rogpeppe> niemeyer: exactly
<fwereade> niemeyer, heyhey; sorry, landlady came round early
<niemeyer> fwereade: Heya!
<rogpeppe> niemeyer: different versions might be appropriate for different os/arch combinations
<niemeyer> rogpeppe: That scheme looks reasonable
<rogpeppe> niemeyer: the only really new bit is that the binaries are usually copied to private storage
<rogpeppe> niemeyer: that doesn't happen with the bootstrap machine, unless you force it with --upload-client
<niemeyer> rogpeppe: Hmm
<niemeyer> rogpeppe: private doesn't have to be private.. that might make things simpler
<rogpeppe> niemeyer: fwereade's suggestion, based around what we do with charms
<rogpeppe> niemeyer: how do you mean?
<niemeyer> rogpeppe: Private might be a bit misleading, in the sense that this is the environment storage to which the client has write access
<niemeyer> rogpeppe: There's no intrinsic reason why *those* files, specifically, have to be private
<rogpeppe> niemeyer: it's also the same storage name space that's used for storing the environment zookeeper addresses
<niemeyer> rogpeppe: Hmmm.. on S3, though, we can do even better
<niemeyer> rogpeppe: We can make $u a signed URL, that has read access from the bucket even though it is private
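The signed-URL idea works because S3's query-string authentication (the scheme current at the time) lets a private object be read via a URL carrying an expiry and an HMAC-SHA1 signature. A sketch with hypothetical bucket, key, and credentials:

```python
# Sketch of S3 query-string authentication ("signature v2"): a GET URL for
# a private object, readable by anyone holding the URL until it expires.
# Bucket, key, and credentials below are hypothetical.
import base64
import hashlib
import hmac
from urllib.parse import quote

def presigned_url(bucket, key, access_key, secret_key, expires):
    """Build an S3 query-string-authenticated GET URL."""
    # StringToSign for a plain GET: verb, (empty) MD5, (empty) type,
    # expiry timestamp, canonicalized resource.
    string_to_sign = "GET\n\n\n%d\n/%s/%s" % (expires, bucket, key)
    sig = base64.b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode(),
                 hashlib.sha1).digest()).decode()
    return ("https://%s.s3.amazonaws.com/%s"
            "?AWSAccessKeyId=%s&Expires=%d&Signature=%s"
            % (bucket, key, access_key, expires, quote(sig, safe="")))

url = presigned_url("control-bucket", "tools/juju.tgz",
                    "AKIDEXAMPLE", "secret", 1335830400)
```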
<rogpeppe> niemeyer: i'm not sure i see how that helps
<niemeyer> rogpeppe: Hold on, sorry.. why is $u being copied?
<niemeyer> rogpeppe: i've missed that in the scheme
<rogpeppe> niemeyer: the idea is that an environment gets a stable, local copy of the juju tools, even if the originals have come from a remote, potentially unstable source.
<niemeyer> rogpeppe: We shouldn't worry about that now IMO.. $PUBLIC is going to be configurable
<niemeyer> rogpeppe: and right now both private and public are equally trustable, being in S3
<rogpeppe> niemeyer: the difficulty i'm having is that we've subsumed UploadTools into PutFile, but there are now *two* name spaces not one
<rogpeppe> niemeyer: we've got the place that the tools are stored and the place that the environment settings (zookeeper addresses currently) are stored
<niemeyer> rogpeppe: We've always had two.. that's not related to UploadTools vs. PutFile
<rogpeppe> niemeyer: we didn't use PutFile for the other storage space though, right?
<niemeyer> rogpeppe: Which other storage space?
<rogpeppe> niemeyer: the storage space for the executable files
<niemeyer> rogpeppe: Sorry, let's get a bit more specific
<niemeyer> rogpeppe: PutFile should be used by the client to upload the tools to the environment storage
<niemeyer> rogpeppe: PutFile is not used to upload to the public storage because it's not maintained by the juju client
<niemeyer> rogpeppe: Does that make sense?
<niemeyer> rogpeppe: If it does, I'm not sure about what you mean
<rogpeppe> niemeyer: yes it does. and i think my confusion has gone. if i delete the lines that copy $u, it all looks fine.
<niemeyer> rogpeppe: Sweet
<rogpeppe> niemeyer: one other reservation was that the prefix "tools/" becomes rather special, but i think that's fine - i've documented it on Environ.PutFile: http://paste.ubuntu.com/957764/
<niemeyer> rogpeppe: Agreed
<rogpeppe> niemeyer: and that makes it reasonable for an Environ implementation to know about files of that form when searching for tools
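A sketch of what "knowing about files of that form" might look like, assuming a hypothetical `tools/juju-<version>-<series>-<arch>.tgz` naming scheme (the real layout may well differ):

```python
import re

# Hypothetical naming convention for files under the special "tools/" prefix:
# tools/juju-<version>-<series>-<arch>.tgz (the real scheme may differ).
TOOLS_RE = re.compile(r"^tools/juju-(\d+\.\d+\.\d+)-([a-z]+)-([a-z0-9]+)\.tgz$")

def parse_tools_path(path):
    """Return {version, series, arch} for a tools path, or None."""
    m = TOOLS_RE.match(path)
    if m is None:
        return None
    version, series, arch = m.groups()
    return {"version": version, "series": series, "arch": arch}

def find_tools(paths, series, arch):
    """Pick the highest-versioned tools matching the target series/arch."""
    candidates = []
    for p in paths:
        info = parse_tools_path(p)
        if info and info["series"] == series and info["arch"] == arch:
            candidates.append((tuple(map(int, info["version"].split("."))), p))
    # Compare versions numerically, not lexically ("1.10.0" > "1.9.0").
    return max(candidates)[1] if candidates else None

best = find_tools(
    ["tools/juju-1.9.0-precise-amd64.tgz",
     "tools/juju-1.10.0-precise-amd64.tgz",
     "tools/juju-1.10.0-precise-arm.tgz",
     "charms/wordpress-1"],
    "precise", "amd64")
```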
<niemeyer> rogpeppe: We'll have another special prefix too: charms/
<rogpeppe> niemeyer: cool.
<niemeyer> rogpeppe: That's how it was originally designed, but the ancient "nova object store" couldn't deal with slashes (!)
<niemeyer> rogpeppe: We don't care about that anymore, so can do the right thing
<rogpeppe> niemeyer: great.
<rogpeppe> niemeyer: i've just been deleting the "juju-origin" code BTW and it feels marvellous
<niemeyer> rogpeppe: +1000 :)
<rogpeppe> niemeyer: >100 lines of code gone, just like that (and that's not even including the tests!)
 * fwereade approves :)
<niemeyer> rogpeppe: re. this:
<niemeyer> """
<niemeyer> rogpeppe: Well, I did suggest
<niemeyer> Arm
<niemeyer> Thanks copy & paste
<niemeyer> i don't think so. "t" is for "test" here - i'm not sure that there's a
<niemeyer> single set of arguments that will result in characteristic for all
<niemeyer> commands, so we use the test table above.
<niemeyer> """
<niemeyer> rogpeppe: The only difference in that table is the "juju" vs "jujud" string
<niemeyer> rogpeppe: It's really not worth a table in that case..
<rogpeppe> niemeyer: i guess, if we can force a similar-looking usage message out of all juju commands by passing them the same arguments
<rogpeppe> niemeyer: it's probably true of jujuc too, and perhaps that's all we'll ever have
<rogpeppe> niemeyer: ok, will remove the table
<niemeyer> rogpeppe: That's my guess, and we can always change back when it does make sense to have it
<niemeyer> rogpeppe: Mostly a "doesn't look like it's necessary", rather than "we can't do this"
<rogpeppe> niemeyer: k
<TheMue> So, just came back to wish you all a happy Labour Day tomorrow, and you a good time in IAK, niemeyer.
<TheMue> niemeyer: I've done the watcher test refactoring, it's proposed.
<niemeyer> TheMue: Thanks a lot, have a good time too!
<niemeyer> TheMue: Cheers.. I'll do a review pass tonight
<TheMue> niemeyer: Thank you, will have. And looking forward to next week. But we'll meet here on Wednesday again.
<TheMue> Bye
<rogpeppe> niemeyer: i've made that change BTW
<niemeyer> rogpeppe: Thanks!
<niemeyer> rogpeppe: Looking, tentatively :)
<niemeyer> Have to move
<niemeyer> biab
<rogpeppe> fwereade: ping
<fwereade> rogpeppe, pong
<rogpeppe> fwereade: here's a thought: rather than having a --upload-client flag, we have an upload-client command
<fwereade> rogpeppe, hmm, that sounds like a good idea to me
<fwereade> rogpeppe, although it makes me less certain that that's precisely the right name
<rogpeppe> fwereade: 'cos i was just looking at it as a flag and it's not really appropriate on quite a few commands, but seems like it might work well as a command in its own right
<rogpeppe> fwereade: upload-tools? upload?
<fwereade> rogpeppe, upload-tools sounds pretty good
<rogpeppe> fwereade: the idea is you can execute it *before* bootstrap if you want
<fwereade> rogpeppe, yes, very nice
<rogpeppe> fwereade: good, thanks.
<rogpeppe> fwereade: ok, i'll give that a go
<rogpeppe> fwereade: unless you wanna
<fwereade> rogpeppe, I'm very happily fiddling with the hook commands atm
<rogpeppe> fwereade: ok great
<rogpeppe> right, i'm off for the evening. see y'all tomorrow
<fwereade> likewise
<fwereade> take care all
#juju-dev 2012-05-01
<rogpeppe> niemeyer: morning!
<niemeyer> rogpeppe: Heya!
<rogpeppe> niemeyer: thanks for the LGTM, BTW
<rogpeppe> niemeyer: will submit after i've upgraded to precise this morning. thought it might be a good idea before heading to UDS. i'm hoping the battery life will have reverted towards what i got under natty...
<niemeyer> rogpeppe: Ah, super
<niemeyer> rogpeppe: Just finishing reviews to get some sleep here as well
<rogpeppe> niemeyer: hmm, precise says it's going to remove libzookeeper-dev, i wonder why
<rogpeppe> niemeyer: oh yes, one thing i discussed with william last night
<rogpeppe> niemeyer: how do you feel about upload-tools being a separate subcommand rather than a flag?
<niemeyer> rogpeppe: Where would it upload to?
<rogpeppe> niemeyer: to private storage
<niemeyer> rogpeppe: Which private storage?
<niemeyer> rogpeppe: There's no environment..
<rogpeppe> niemeyer: the storage mentioned in environments.yaml
<niemeyer> rogpeppe: The environment doesn't exist
<rogpeppe> niemeyer: it is created when necessary. it doesn't matter if it's created by bootstrap or upload-tools, i think
<niemeyer> rogpeppe: bootstrap is what creates it.. we can't expect the private storage to work before that happens.
<niemeyer> rogpeppe: Heh
<niemeyer> rogpeppe: So you're turning upload-tools into bootstrap..
<niemeyer> rogpeppe: Doesn't feel great.
<rogpeppe> niemeyer: bootstrap is all about starting the first machine, i think. the storage is somewhat incidental.
<niemeyer> rogpeppe: bootstrap is all about creating the environment. Everything else is a side effect of that.
<niemeyer> rogpeppe: Storage can't be assumed to work at all before the environment has been bootstrapped.
<rogpeppe> niemeyer: ok, so if it's not a command, which commands should get --upload-tools (or --upload-client?) as a flag?
<niemeyer> rogpeppe: There are environments that already depend on that (local) and it's pretty much certain that private storage will move out of S3 in the future and into a custom service
<niemeyer> rogpeppe: Right now, bootstrap
<rogpeppe> niemeyer: and what if you want to upload tools for several architectures before bootstrapping?
<niemeyer> rogpeppe: You can't. There's nowhere to upload to before bootstrap happens.
<rogpeppe> niemeyer: if there was a separate command that could only run after bootstrap, in addition to the --upload-tools flag on bootstrap, that could work
<rogpeppe> niemeyer: something to think about anyway. for the time being, i'll just do the flag on bootstrap.
<rogpeppe> niemeyer: thanks for the input.
<niemeyer> rogpeppe: Maybe, but it might work fine to have --upload-tools as a flag of upgrade-juju as well
<niemeyer> rogpeppe: Hmm.. not sure
<niemeyer> rogpeppe: Maybe upload-tools would be cleaner
<rogpeppe> niemeyer: yeah, not sure either.
<rogpeppe> niemeyer: it seems like it's mostly orthogonal
<niemeyer> rogpeppe: Yeah, you're right
<niemeyer> Okay.. really need some sleep now
<rogpeppe> niemeyer: yeah, you do!
<rogpeppe> :-)
<niemeyer> rogpeppe: Have a great day :)
<niemeyer> and talk to you in a few hours maybe
<rogpeppe> niemeyer: yeah
<rogpeppe> niemeyer: sweet dreams..
<niemeyer> Thanks man
 * niemeyer => sweetness
<wrtp> niemeyer: mornin'
<wrtp> niemeyer: sleep well? :-)
<niemeyer> wrtp: Heya!
<niemeyer> wrtp: Yep
<niemeyer> wrtp: All good, thanks
<niemeyer> wrtp: How was the day?
<wrtp> niemeyer: not quite as productive as hoped. i'm a little unsure about the best way to proceed.
<wrtp> niemeyer: i might change the proposal i made earlier - i'm currently thinking that an "uploadTools" bool argument to Bootstrap might be better.
<niemeyer> wrtp: We might have both, I suppose
<niemeyer> wrtp: Actually, hmm..
<wrtp> niemeyer: i think that's the best way
<niemeyer> wrtp: What's the background?
<wrtp> niemeyer: one current problem is that given the Environ interface, i'm forced to call PutFile before Bootstrap
<wrtp> niemeyer: i thought about having a separate "Create" call to create the environment's state, but that seems wrong.
<niemeyer> wrtp: Yeah, I see what you mean.. maybe the bool is the best solution.. let me think for a moment as well
<wrtp> niemeyer: the other problem is how to tell the Environ to do exact version matching.
<wrtp> niemeyer: i'm also trying to think forward with an eye towards constraints.
<niemeyer> wrtp: Talk me through it
<wrtp> niemeyer: my current idea is that if the uploadTools bool arg to Bootstrap is true, the environ creates the state appropriately and calls environs.UploadTools. it also triggers exact version matching (which will need to be an argument to the provisioning agent).
<wrtp> niemeyer: *and* (for the future) it adds the current architecture to the bootstrap constraints.
<wrtp> niemeyer: in addition to that, we'd have an upload-tools command to juju which would upload tools for the current version/arch with no other side effects.
<wrtp> niemeyer: how does that sound?
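wrtp's proposal might be sketched like this (all names are hypothetical, not the real Environ interface): the uploadTools flag makes bootstrap push the locally built tools, pin exact version matching, and constrain the bootstrap machine to the client's architecture.

```python
# Hedged sketch of the Bootstrap(uploadTools) behaviour proposed above
# (all names hypothetical, not the real juju API).

class Environ:
    def __init__(self, storage):
        self.storage = storage  # environment ("private") storage

    def bootstrap(self, upload_tools, current):
        """current: dict with version/series/arch of the local client."""
        constraints = {}
        if upload_tools:
            # Push the locally built tools and pin the exact version.
            path = "tools/juju-%(version)s-%(series)s-%(arch)s.tgz" % current
            self.storage[path] = b"locally-built-tools"
            exact_version = True
            # For the future: pin the bootstrap machine to the client's arch.
            constraints["arch"] = current["arch"]
        else:
            exact_version = False
        return {"exact_version": exact_version, "constraints": constraints}

env = Environ(storage={})
result = env.bootstrap(
    upload_tools=True,
    current={"version": "1.0.0", "series": "precise", "arch": "amd64"})
```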
<niemeyer> wrtp: It sounds reasonable
<wrtp> niemeyer: cool.
<niemeyer> wrtp: Both things seem interlocked really
<niemeyer> wrtp: We can't bootstrap without the tools, and we can't upload the tools without bootstrapping
<niemeyer> wrtp: So it makes sense to have the chicken being fabricated to get started :)
<wrtp> niemeyer: yeah. well, maybe - it works ok on ec2, but maybe not with other providers as you've said
<wrtp> niemeyer: and we'd need another arg to bootstrap anyway to ask for an exact version.
<niemeyer> wrtp: Even EC2 itself may not work so well with that in the near future
<niemeyer> wrtp: Think environment storage moving onto the nodes
<Aram> hi folks.
<wrtp> niemeyer: how can that work?
<niemeyer> Aram: Heya
<wrtp> niemeyer: how can we know what node to contact without some storage?
<wrtp> Aram: hi
<niemeyer> wrtp: There are good ways to tag machines in EC2 nowadays
<wrtp> niemeyer: ah, yeah, that seems reasonable then. ec2 metadata becomes the storage...
<niemeyer> wrtp: Not really the storage..
<wrtp> niemeyer: the storage for the initial contact info
<wrtp> niemeyer: replacing zookeeper-addresses (or whatever the name is)
<niemeyer> wrtp: yeah.. this is really a lookup mechanism rather than actual storage
<niemeyer> wrtp: It'll actually be a lot nicer that way
<niemeyer> wrtp: We can even move into this direction today, I guess
<niemeyer> wrtp: Well, not yet
<niemeyer> wrtp: We'd still need the control bucket either way
<wrtp> niemeyer: yeah
<niemeyer> wrtp: But it'll be awesome to drop that ugly setting
<wrtp> niemeyer: although calling it "control-bucket" would soon be a bit wrong
<niemeyer> wrtp: Well, I hope we kill it for good
<niemeyer> wrtp: It'll also be a lot nicer to port it across providers
<niemeyer> Oops.. time to talk about juju here..
<wrtp> niemeyer: have fun
<wrtp> niemeyer: off for the day. see you tomorrow some time, hopefully.
<niemeyer> andrewsmedina: Heya
<niemeyer> andrewsmedina: How're things going there?
<andrewsmedina> niemeyer: hi
<andrewsmedina> niemeyer: I'm finishing the work on networking for the local provider
<andrewsmedina> niemeyer: rog did the review
<andrewsmedina> niemeyer: https://codereview.appspot.com/6099051/
<niemeyer> andrewsmedina: I noticed, that's awesome.. a bit rushy this week due to the event here, but I'm looking forward to checking it out
#juju-dev 2012-05-02
<TheMue> morning
<TheMue> davecheney: Welcome to the team.
<davecheney> hey there!
<TheMue> davHave you already officially started, or is it the 01.06.?
<TheMue> davecheney: Oops, forgotten to press tab. ;)
<davecheney> yes, i started on the 1st
<fwereade> heya wrtp
<wrtp> fwereade: mornin'!
<TheMue> fwereade, wrtp: Morning
<fwereade> morning TheMue
<wrtp> TheMue: yo!
<TheMue> fwereade, wrtp: Will be funny in future, Dave just left the channel.
<wrtp> TheMue: yeah, big time difference...
<wrtp> TheMue: has he started now?
<TheMue> wrtp: Yes, yesterday.
<fwereade> where's he based?
<wrtp> fwereade: oz
 * TheMue is preparing his 12.04 upgrade.
<wrtp> TheMue: i did mine yesterday
<wrtp> TheMue: it was *not* hassle-free!
<wrtp> TheMue: but it finally worked
<TheMue> wrtp: Tell me more (while my backup is running).
<wrtp> TheMue: well, i set the upgrade process working. it got most of the way through, but at some point everything went blank. the machine wasn't down (i could switch to a terminal) but the window system was dead.
<wrtp> TheMue: then the recovery console came up, but i couldn't log in, so i tried "shutdown now" in a terminal, which hung, so i had to do a hard power down
<wrtp> TheMue: when i rebooted, all the installation process was borked half way through (the mouse didn't work and the resolution was about 640x480). then i had to run "apt-get install -f" and another command, which finally completed the installation, but again shutdown didn't work and i had to hard power down.
<wrtp> TheMue: after that, it seems to work.
<wrtp> TheMue: oh yes, when installing packages, it hung for an hour or two until i realised that i had to click on "Terminal" in the upgrade manager and press return to get through an interactive bit.
<TheMue> wrtp: Aha, good hints. So I'll see what happens here.
<fwereade> TheMue, wrtp: yeah, that "terminal" business was annoying; I started my upgrade just before going out for the night
<wrtp> fwereade: (yorkshire voice) it's a bloody disgrace!
<wrtp> fwereade: honestly, if i'd been a naive user, my install would've failed there. i wonder what my neighbour did...
<TheMue> Hehe.
<Aram> moin.
<TheMue> wrtp: So far it looks like everything worked fine here.
<wrtp> TheMue: cool, have you rebooted into precise then?
<TheMue> wrtp: Yes, looks good so far. First a bit higher load, but now it's ok.
<wrtp> TheMue: guess it's just my machine then
<wrtp> TheMue: or my malign influence
<TheMue> wrtp: There are enough influencing factors.
<TheMue> wrtp: But you have to know that my Ubuntu runs in a VM, so always an optimal environment.
<wrtp> TheMue: i didn't know that. what do you run it inside?
<TheMue> wrtp: My machine is a MacBook Pro, and Ubuntu runs very good in VMware Fusion.
<TheMue> wrtp: The Mac is large enough, i7-2720 and 8 gig, so I can give half of it to Ubuntu. ;)
<wrtp> TheMue: ah, i thought you were using an ubuntu laptop. i guess you just didn't want to bring a mac laptop to a canonical event :-)
<TheMue> wrtp: I've been an Apple user for a long time now. Then I tried a Dell (you've seen it in January) but was never really happy with it. So I switched back.
<wrtp> TheMue: carmen has stolen my macbook now :-)
<TheMue> wrtp: Hehe, intelligent wife.
<wrtp> fwereade: you've got a review
<wrtp> fwereade: you might wanna have a look at https://codereview.appspot.com/6145043/
<fwereade> wrtp, tyvm
<fwereade> wrtp, btw, um, a review for what?
<wrtp> fwereade: https://codereview.appspot.com/6163044/
<fwereade> wrtp, thanks (sorry, mail just arrived)
<fwereade> wrtp, LGTM
<wrtp> fwereade: thanks
 * TheMue is happy to see that his Sublime Text works fine with HUD
 * wrtp will probably never use the HUD except when he hits "Alt" by mistake...
 * wrtp ordered a new phone this morning. indulgent i am.
<fwereade> wrtp, cool, what are you getting?
<wrtp> fwereade: HTC One S
<fwereade> wrtp, oo, shiny :)
<wrtp> fwereade: am a little concerned about battery life and it's annoying it won't take an extra SD card, but looks fairly good otherwise. it was a snap decision, might be a bad one!
<fwereade> wrtp, objectively, everything you can possibly buy is crap: consider your decision a blow against the confusopoly
<wrtp> fwereade: serves me right for not doing full research before calling vodafone ("we've got a good deal for you but only if you choose NOW")
<fwereade> wrtp, grar
<wrtp> fwereade: yeah. every phone i've ever got has been a snap decision. and i've mostly been happy (that nokia thing was dreadful though)
<wrtp> fwereade: yeah, i do hate it when they do that. and maybe her "i'll just go and speak to my manager" was code for "i'll just put you on hold for a couple of minutes and then give you the standard deal"...
<TheMue> wrtp: You don't think they talk to their manager especially for you? (wonder)
<wrtp> TheMue: well, i *think* she probably did, 'cos she seemed quite reasonable and informed etc, but you just can't tell.
<TheMue> Reminds me of when we bought our kitchen. To get the optimal deal she had to talk to her manager too. Afterwards you could smell the cigarette smoke; she had just taken a break.
<TheMue> The listed prices for kitchens here are almost always about double the price you can actually negotiate.
<wrtp> TheMue: yeah. not sure if that's true for IKEA though.
<TheMue> wrtp: No, there it isn't.
<TheMue> wrtp: But regular IKEA prices and negotiated trade prices are almost the same. We designed our kitchen with the IKEA design application and got a parts and price list. But then they changed the front design, so we took a look at a different dealer (with that list in our hands).
<TheMue> wrtp: And we got a better kitchen for almost the same price (including delivery and installation).
<wrtp> TheMue: interesting. we're quite happy with our ikea kitchen actually. carmen spent long hours with that pretty crappy design app. i think she ended up with 19 possible versions!
<TheMue> wrtp: Hehe, indeed, a crappy app. I also needed some time to enter Carmen's and my ideas.
<TheMue> Aaaaargh!
<TheMue> Enjoyed working on the veranda so far. Now a neighbor started his lawnmower.
<fwereade> wrtp, TheMue: the hook commands which produce output default to a "smart" mode which basically just (python) str()s the result
<fwereade> wrtp, TheMue: any thoughts on what we should be doing there?
<wrtp> fwereade: is smart mode intended for readability or program parseability?
<fwereade> wrtp: if it were meant to be parseable, I'd expect a repr()
<fwereade> wrtp, but frankly str() of a big dict ain't all that readable itself
<wrtp> fwereade: i suppose that we should really produce an identical version because people will be parsing the result and using it in scripts
<fwereade> wrtp, so... I have no idea what the point is really :(
<wrtp> fwereade: but if it was up to me i'd provide a way to produce JSON output
<fwereade> wrtp, that's already there
<TheMue> wrtp: +1
<fwereade> wrtp, I have a lurking suspicion that nobody uses the default output mode for anything at all
<wrtp> fwereade: it would be nice to be able to grep across all known charms...
<fwereade> wrtp, hmm, does charmtools getall (or whatever it is) still work, I wonder
<wrtp> fwereade: thing is, it's never going to be used interactively, so there's really no use for a pretty output AFAICS
<fwereade> wrtp, well, there's always debug-hooks
<wrtp> fwereade: ah, i'd forgotten about that
<fwereade> wrtp, but I did indeed always wonder just how smart the "smart" formatting really was
<wrtp> fwereade: it would be nice to do something that actually looks decent. but not if it breaks charms - i don't think it would be worth it for that.
<fwereade> wrtp, well, the worry is that I'll end up sidetracking myself into writing "encodings/python_useless"
<fwereade> effectively
<wrtp> fwereade: yeah, i totally see that
<wrtp> fwereade: i'd ignore it for now
<wrtp> fwereade: it can be one of the last things we do before "feature parity" probably.
<fwereade> wrtp, hmm, that sounds like a good idea to me
<fwereade> wrtp, cheers :)
<wrtp> fwereade: np
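The trade-off discussed above — a human-oriented "smart" mode versus machine-parseable JSON — can be sketched in Go. Both function names here are hypothetical illustrations of the two styles, not juju's actual formatters:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// formatSmart mimics the Python "smart" mode: it just stringifies the
// value, which is tolerable for scalars but ugly for big maps.
func formatSmart(v interface{}) string {
	return fmt.Sprintf("%v", v)
}

// formatJSON produces machine-parseable output, which is what scripts
// should rely on instead of the smart mode.
func formatJSON(v interface{}) (string, error) {
	b, err := json.Marshal(v)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	cfg := map[string]interface{}{"title": "my blog", "port": 80}
	fmt.Println(formatSmart(cfg))
	if out, err := formatJSON(cfg); err == nil {
		fmt.Println(out)
	}
}
```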
<wrtp> fwereade: i was just contemplating the checkContext method
<fwereade> wrtp, oh yes?
<fwereade> wrtp, feels a touch clunky to me, improvements welcome :)
<wrtp> fwereade: i wonder if it might be better to have newX functions to create commands.
<wrtp> fwereade: then they can check the context for themselves
<fwereade> wrtp, hmm, maybe... I can't really tell which is better tbh
<fwereade> morning niemeyer
<wrtp> e.g. func NewConfigGetCommand(ctx *Context) (cmd.Command, error)
<wrtp> niemeyer: hiya
<wrtp> fwereade: then all the commands could go into a table...
<fwereade> wrtp, sorry, expand a little there?
<niemeyer> fwereade, wrtp: Heya!
<wrtp> fwereade: var commands = map[string]func(*Context)(cmd.Command, error) {"config-get": NewConfigGetCommand, etc...}
<fwereade> wrtp, true, that's nice... I guess it saves a couple of lines of code :)
<wrtp> fwereade: the table is static, so it's more obviously regular too, i think.
<fwereade> wrtp, yeah
<fwereade> wrtp, ok, SGTM
<wrtp> fwereade: and it means the command type can go... for now :-)
<fwereade> wrtp, I need to be off in a very short mo; I'll just -wip that and propose the followup, which should be independent
<fwereade> wrtp, even better
<fwereade> wrtp, cool
<wrtp> fwereade: great
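The constructor-table idea agreed on above can be made concrete. This is an illustrative sketch only: `Context`, `Command`, and the single entry in the table are stand-ins for juju's real `server.Context` and `cmd.Command` types, not the actual implementation:

```go
package main

import "fmt"

// Context stands in for juju's server.Context (hypothetical).
type Context struct {
	UnitName string
}

// Command stands in for juju's cmd.Command interface (hypothetical).
type Command interface {
	Name() string
}

type configGetCommand struct {
	ctx *Context
}

func (c *configGetCommand) Name() string { return "config-get" }

// NewConfigGetCommand is one of the "newX" constructor functions that
// check the context for themselves.
func NewConfigGetCommand(ctx *Context) (Command, error) {
	return &configGetCommand{ctx: ctx}, nil
}

// newCommands is the static table: obviously regular, easy to extend.
var newCommands = map[string]func(*Context) (Command, error){
	"config-get": NewConfigGetCommand,
}

// NewCommand looks up a constructor by name and invokes it.
func (ctx *Context) NewCommand(name string) (Command, error) {
	f, ok := newCommands[name]
	if !ok {
		return nil, fmt.Errorf("unknown command: %q", name)
	}
	return f(ctx)
}

func main() {
	ctx := &Context{UnitName: "wordpress/0"}
	c, err := ctx.NewCommand("config-get")
	fmt.Println(c.Name(), err)
}
```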
<fwereade> g'night all, happy coding; would appreciate your thoughts on https://codereview.appspot.com/6165044
<wrtp> fwereade: g'night
<SpamapS> hazmat: hey, whats the plan on landing strong ACL support in galapagos? I'd like to get it in sooner rather than later.
<andrewsmedina> why does juju use local-hostname/public-hostname instead local-ipv4/public-ipv4 to get ip address metadata for ec2 provider?
<niemeyer> andrewsmedina: Because the ec2 provider isn't the only one, and we may not have ipv4 for a machine
<andrewsmedina> niemeyer: But, In the OpenStack, hostname is not always resolved
<niemeyer> andrewsmedina: How's that an issue?
<andrewsmedina> niemeyer: I'm using juju with openstack. And in my openstack setup I will use only private ips, with that hostnames, public ips, cant be resolved
<niemeyer> andrewsmedina: How's that an issue?
<niemeyer> andrewsmedina: IPs aren't resolved..
<niemeyer> andrewsmedina: Charms can just use the private address to intercommunicate
<andrewsmedina> niemeyer: hmm
<SpamapS> andrewsmedina: isn't it 'address'?
<SpamapS> andrewsmedina: not "hostname" ?
<SpamapS> So its expected that it may be a host name or an IP
<SpamapS> andrewsmedina: most charms use private address to do almost everything anyway
<andrewsmedina> SpamapS: nice, I will try use private address in my charms
#juju-dev 2012-05-03
<wrtp> fwereade: goat moaning
<fwereade> wrtp, baaaaa
<TheMue> mornin'
<fwereade> heya TheMue
<TheMue> fwereade: Today St Peter does his work so as not to make Roger jealous. It's raining here.
<fwereade> TheMue, haha
<wrtp> TheMue: yo!
<wrtp> TheMue: it's dry here, at least
<wrtp> TheMue: but still grey
<wrtp> and cold
<TheMue> wrtp: Temperature dropped by about 10° and rain has replaced sun.
<fwereade> wrtp, good advice last night, I think it's much better: https://codereview.appspot.com/6163044/
<wrtp> fwereade: cool. yeah, i think that does look cleaner, thanks.
<fwereade> wrtp, btw, I think it *is* a getCommand not a command -- but do you have a better name in mind?
<fwereade> wrtp, maybe commands should be commandGetters
 * wrtp has another look
<fwereade> wrtp, a little voice from my misspent youth is telling me I should use the word "factory", but I'm trying to pretend I can't hear it ;)
<wrtp> lol
<wrtp> fwereade: perhaps the word "new" should come into it somewhere
<fwereade> wrtp, not so sure, I feel that "new" binds too weakly
<fwereade> wrtp, "newCommands" sounds to me like it's doing something different
<wrtp> fwereade: newc ? cmdMaker ? plain "new" ?
<wrtp> cf ? :-)
<fwereade> wrtp, "constructors"?
<wrtp> fwereade: i don't mind the name "commands". it's just "getCommand" that reads oddly to me.
<wrtp> fwereade: it sounds more like it's getting a command that already exists rather than constructing a new one
<fwereade> wrtp, a method called "GetCommand" that chooses a "getCommand" helper and returns its result feels pretty natural to me
<fwereade> wrtp, hmm, yeah
<wrtp> fwereade: or that it's an actual command that gets something
<fwereade> wrtp, maybe the method itself should be NewCommand(name string)
<wrtp> fwereade: +1
<fwereade> wrtp, then "newCommands" and "newCommand" feel good
<fwereade> wrtp, cool
<fwereade> thanks :)
<wrtp> fwereade: tbh in such a local context, a short variable name would read well, i think.
<wrtp> fwereade: then you have to look elsewhere for context, and you'll see the type of commands etc
<fwereade> wrtp, "c" it is then :)
<wrtp> fwereade: or "f"? given it's a function not a command.
<fwereade> wrtp, +1
 * wrtp likes "NewCommand"
<wrtp> fwereade: LGTM
<fwereade> wrtp, cheers
<fwereade> wrtp, TheMue: should be utterly trivial: https://codereview.appspot.com/6184044
<TheMue> *click*
<fwereade_> morning niemeyer
<niemeyer> fwereade_: Hey man!
<niemeyer> fwereade_: Good timing, I was "talking to you" :-)
<fwereade_> niemeyer, dammit, is my astral projection wandering again?
<fwereade_> niemeyer, what can I do for you?
<niemeyer> fwereade_: Was just answering https://codereview.appspot.com/6124051/
 * fwereade_ reads...
<fwereade_> niemeyer, fair enough :)
<fwereade_> niemeyer, although I'm not sure how to communicate exit codes with that model
<niemeyer> fwereade_: Hmm
<niemeyer> fwereade_: I almost suggested before that the Main function in cmd should actually return an error instead, precisely for that reason
<fwereade_> niemeyer, I'd kinda prefer to keep the exit codes meaningful really
<niemeyer> fwereade_: Maybe that's what we should do.. there's a single situation where we return a code different than 1, right?
<fwereade_> niemeyer, Init errors return 2
<fwereade_> niemeyer, which could really be caused by anything under the general heading of parsing
<niemeyer> fwereade_: Ok.. anyway.. we can easily get going by merely returning the code from Main
<fwereade_> niemeyer, ok, sounds good
<niemeyer> fwereade_: I've used both of these schemes in other applications before
<niemeyer> fwereade_: Both the early dying and the return err
<niemeyer> fwereade_: Returning errors generally ended up producing better code
<fwereade_> niemeyer, except, hmm again: a code of 1 could come either from the "real" command, or from jujuc, and I think we only want to print jujuc help in the latter case
<niemeyer> fwereade_: Let's please never print help unless the user asks for it
<fwereade_> niemeyer, in practice I think that means either "never print help" or "complicate things an irritating amount to detect the cases in which the user asked for help and didn't get it from the real command"
<niemeyer> fwereade_: I don't understand..
<niemeyer> fwereade_: If the user asked for help and didn't get from the real command, that's a bug in the real command
<fwereade_> niemeyer, I'm thinking of the case in which we couldn't talk to the agent for whatever reason
<niemeyer> fwereade_: "error: couldn't talk to the agent"?
<fwereade_> niemeyer, that was the original intent: that the various things that can go wrong at this end print an appropriate error *and* an explanation of what's going on; otherwise everything is handled through the stuff returned by the agent
<niemeyer> fwereade_: I understand, but I don't think explaining how things work every single time is necessary, or even nice
<niemeyer> fwereade_: an error message explaining what went wrong should be good enough
<niemeyer> fwereade_: ssh => invalid password; vs. ssh => invalid password + ssh is an application that requires a password to be set blah blah blah
<fwereade_> niemeyer: so I should change this everywhere?
<niemeyer> fwereade_: Do we have other locations doing the same thing?
<niemeyer> fwereade_: I recall making the same comment elsewhere before
<fwereade_> niemeyer, parse errors print help; others don't
<niemeyer> fwereade_: I'm personally less concerned about those, but I don't mind if you decide to change them to behave that way as well
<fwereade_> niemeyer, jujuc seemed to me to be a special case in that an error out of jujuc itself implies extreme brokenness
<fwereade_> niemeyer, personally I like getting help when I type utter gibberish
<fwereade_> niemeyer, but that's by the by
<niemeyer> fwereade_: Sure, if you want to keep showing help on parse errors, I personally don't mind either. That said, I don't *miss* the help, even in those cases
<fwereade_> niemeyer, assumed context for printing help when jujuc is run in a bad environment is "users should only actually see this when they try to run jujuc on its own"
<niemeyer> fwereade_: I prefer a one-liner error saying "error: you mistyped --gibberush" than "blah blah blah blah blah blah blah.. and you mistyped --gibberush"
<niemeyer> fwereade_: "error: jujuc must be called from its symlinks"?
<fwereade_> niemeyer, hmm, ok, how about printing that iff filepath.Base(os.Args[0]) == "jujuc"
<fwereade_> niemeyer, and otherwise not offering help (except as supplied by remote commands ofc)
<niemeyer> fwereade_: Sounds good
<fwereade_> niemeyer, cool, thanks
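The convention the two settle on can be sketched as follows. This is a hedged illustration: the error text and exit code values are examples rather than jujuc's actual ones, though the `filepath.Base(os.Args[0])` check is the one proposed above, and `Main` returning a code (instead of exiting deep inside) is the scheme niemeyer describes:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Main returns an exit code rather than calling os.Exit internally,
// which generally produces better-structured code than early dying.
func Main(args []string) int {
	// jujuc is only meant to be invoked via its per-command symlinks;
	// print a one-liner error (not help) when called directly.
	if filepath.Base(args[0]) == "jujuc" {
		fmt.Fprintln(os.Stderr, "error: jujuc should not be called directly; use its symlinks")
		return 2 // illustrative: the chat mentions 2 for init/parse errors
	}
	// ... dispatch to the command named by filepath.Base(args[0]) ...
	return 0
}

func main() {
	os.Exit(Main(os.Args))
}
```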
<niemeyer> fwereade_: Thank you, and sorry for not being very clear about those ideas before
<wrtp> niemeyer: hiya
<wrtp> niemeyer: any chance of a review of https://codereview.appspot.com/6159044/ ?
<niemeyer> wrtp: Ah, sorry.. I half-reviewed it already but forgot to finish yesterday
<wrtp> niemeyer: np. i'm working on another s3 branch which i'd hoped would be independent but turns out it's not.
<niemeyer> wrtp: LGTM
<wrtp> niemeyer: cool, thanks.
 * TheMue will next propose the relation methods of topology. The code is even more compact than Py. Nice.
<niemeyer> TheMue: Sweet
<TheMue> niemeyer: Yes, indeed. But I still would like you to tell me a bit more about the relations concept next week. Inside topology it's simple, but I'm still feeling insecure regarding the relation manager etc.
<niemeyer> TheMue: Sounds good
<TheMue> niemeyer: Thx
<jimbaker> hazmat, were you able to take a look at lp:~jimbaker/juju/unit-rel-lifecycle-start-invoker ? (https://codereview.appspot.com/6131061/)
<hazmat> jimbaker, checking it now
<jimbaker> hazmat, thanks
<fwereade_> gn all
<wrtp> fwereade_: see ya tomorrow
<wrtp> niemeyer: just discovered a slight oddity in the s3 package
<wrtp> niemeyer: is it right that the ETag has double-quotes around it?
<wrtp> niemeyer: the tests check that, but it seems odd
<niemeyer> wrtp: I don't really remember that anymore, but if it's there, it means I've probably seen it somewhere at least
<wrtp> niemeyer: yeah, it seems weird, but i'll add a bit of documentation for others to note.
<niemeyer> wrtp: Worth testing
<wrtp> niemeyer: yeah, it bowled out an error in my test server.
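The double quotes are in fact mandated by the HTTP specification: an ETag header value is a quoted string, so the s3 package's behaviour is correct, and a client that wants the bare MD5 has to strip them itself. A minimal illustration (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// bareETag strips the surrounding double quotes that HTTP requires
// around ETag values, yielding the bare content hash.
func bareETag(etag string) string {
	return strings.Trim(etag, `"`)
}

func main() {
	// S3 returns the object's MD5 wrapped in quotes, per the HTTP spec.
	fmt.Println(bareETag(`"9b2cf535f27731c974343645a3985328"`))
	// prints 9b2cf535f27731c974343645a3985328
}
```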
<TheMue> Hmm, today my net is pretty unstable.
<wrtp> niemeyer: https://codereview.appspot.com/6185044
<wrtp> niemeyer: sorry about the gofmt noise, but i think it's easy to see when a file has only changed white space
<wrtp> right, good moment to stop.
<wrtp> see y'all tomorrow
<niemeyer> wrtp: Cool
<niemeyer> wrtp: have a good time
<hazmat> jimbaker, sorry, got derailed by a scale test demo of juju on 10k nodes..
<jimbaker> hazmat, no worries
<fwereade_> niemeyer, if you have a moment, I'm not sure what the issue is with contexts
<niemeyer> fwereade_: I'm not sure either, so happy to talk if you're not sleeping yet :)
<fwereade_> niemeyer, there is server.Context and cmd.Context which are entirely distinct
<niemeyer> fwereade_: The basic question is: how come we have two contexts, and one of the is completely ignored?
<niemeyer> fwereade_: Yes, and that's confusing.. why there are two contexts and why is one of them getting into the function without being used?
<fwereade_> niemeyer, the cmd.Context isn't used if we happen not to use stdout/stderr/working-dir
<niemeyer> fwereade_: It's quite possible that it's totally fine, but the fact I'm confused is at least an indicator that it'll not be obvious why that exists in a bit
<fwereade_> niemeyer, I think/hope that the source of the confusion is the fact that we have 2 different types in different packages which have the same name and end up being used close to one another
<niemeyer> fwereade_: It's more than that.. we have two different types, in different packages, used for the same thing, and getting to the context in two different ways, with one of them being entirely ignored at times
<fwereade_> niemeyer, how are they being used for the same thing?
<fwereade_> niemeyer, one of them is about the...
<niemeyer> fwereade_: Maybe I'm just confused then
 * niemeyer looks at the code
<fwereade_> niemeyer, ...the execution environment, and one of them is about the state we're working against
<fwereade_> niemeyer, I haven't found it confusing myself *but* that's no doubt because I'm in an extremely privileged position there
<fwereade_> niemeyer, the name "Context", now I think of it, is not much better than "DataManager"
<niemeyer> fwereade_: DataManager is as good as Struct :-)
<fwereade_> niemeyer, I had hoped the cmd.Context/server.Context distinction would be considered appropriately gotastic, but clearly that's not the case ;)
<niemeyer> fwereade_: and the issue may come out of the naming indeed.. by definition, a thing executing in two contexts at the same time is unexpected
<fwereade_> niemeyer, part of me wanted to call server.Context something like server.State, but then we hit the same issue with state.State
<niemeyer> fwereade_: Their name is ok in isolation, but the moment you have two contexts at once, something feels wrong
<niemeyer> fwereade_: The issue is partly in my head indeed, anyway
<niemeyer> fwereade_: Don't worry about it.. the issue is less important than I thought
<niemeyer> fwereade_: We can find a better name later if necessary
<niemeyer> fwereade_: I'll review the code again in light of your explanation, thanks
<fwereade_> niemeyer, cool, thanks -- I was toying with explicit cmdCtx/serverCtx throughout, but that felt painful too
<fwereade_> niemeyer, suggestions appreciated :)
<niemeyer> fwereade_: I don't have any good suggestions either right now.. if I come up with something that you might like I'll post in the review
<niemeyer> fwereade_: Thanks, and sorry for the noise
<fwereade_> niemeyer, cheers
<fwereade_> niemeyer, and no worries :)
<fwereade_> ttyl
<niemeyer> fwereade_: Ok, got a more concrete suggestion that might take away the minor confusion
<niemeyer> fwereade_:  Review is up, with an associated LGTM assuming you're happy with it too
#juju-dev 2012-05-04
<fwereade_> morning wrtp
<wrtp> fwereade_: hiya
 * wrtp is somewhat grazed
<fwereade_> wrtp, bike ride complications?
<wrtp> fwereade_: yeah, coming downhill, did a spectacular slide on a bend.
<fwereade_> wrtp, heh, at least it was spectacular, I suppose ;)
<wrtp> fwereade_: yeah, as is the scrape on my hip!
<fwereade_> wrtp, ouch :(
<wrtp> fwereade_: at least the bike's ok too
<fwereade_> wrtp, jolly good
<fwereade_> wrtp, when I was commuting by bike it surprised me how much the various associated expenses added up
<wrtp> fwereade_: yeah. i think i'm going to cheat and leave it at a shop to be serviced this week...
<wrtp> fwereade_: when are you arriving at UDS BTW?
<fwereade_> wrtp, er, lunchtime-ish on sunday
<fwereade_> wrtp, and you?
<wrtp> fwereade_: sat afternoon
<fwereade_> wrtp, ah, nice
<wrtp> fwereade_: got the new phone yesterday afternoon. it *is* shiny.
<fwereade_> wrtp, cool
<fwereade_> wrtp, I'm vaguely hoping to hold off on new phones until I can get one with ubuntu for android
<fwereade_> wrtp, but, well, my current one is hanging by a thread
<fwereade_> wrtp, we'll see how long it lasts :)
<wrtp> fwereade_: that would be quite cool.
<wrtp> fwereade_: you could probably do that on this one if you jailbroke it
<fwereade_> wrtp, probably not on an iphone
<fwereade_> wrtp, well, ok, probably it's *possible*, but it strikes me as the sort of monstrous horror-project that is not worth the bragging rights ;)
<fwereade_> oo, breakfast :)
<TheMue> morning
<wrtp> TheMue: hi!
<TheMue> wrtp: Hi
<TheMue> Hmm, the San Francisco photo walk would be nice, but we have our team dinner on Wednesday.
<fwereade_> wrtp, TheMue: I'm trying to figure out what the best thing to do next is
<fwereade_> wrtp, TheMue: it seems to me that it might have to be constraints
<TheMue> fwereade_: Wouldn't conflict with me. I'm currently doing relations in topology.
<fwereade_> wrtp, TheMue: because that's needed for bootstrap/PA parity
<fwereade_> TheMue, ok, cool -- I guess we'll both be messing around in state for that at some stage but I can certainly start off without conflicts
<fwereade_> wrtp, TheMue: *however* I suspect that conversations at UDS will affect precisely how we go ahead with it
<TheMue> fwereade_: Yes, think so too.
<fwereade_> wrtp, TheMue: so I'm open to, and sl. in favour of, other suggestions as well
<wrtp> fwereade_: i'm wondering if constraints is quite close to what i'm on currently
<wrtp> fwereade_: i'm thinking that agent stuff might be a better area
<wrtp> fwereade_: because i'm soon to have go stuff running on the server side and we can't do anything without agents
<wrtp> fwereade_: although i have to say i was probably going to move on to the provisioning agent next
<fwereade_> wrtp, hmm, ok, the PA is definitely the first one we need
<fwereade_> wrtp, do we have any upstart-conf-generating stuff yet?
<wrtp> fwereade_: nope
<fwereade_> wrtp, that should hopefully be quite self-contained then, I'll take a look at that
<wrtp> fwereade_: is that done by the machine agent?
<fwereade_> wrtp, it's done by and for several things -- MA will create them for UA, PA will I think create them for the MA and send them over in cloud-init
<fwereade_> wrtp, bootstrap should do so for the PA (plus the MA)
<fwereade_> wrtp, it's not strictly necessary in any of those cases, we did fine for a while just starting the processes
<wrtp> fwereade_: interesting, i hadn't seen any of that (perhaps upstart stuff was added after i ported the basic provider cloudinit stuff?)
<fwereade_> wrtp, but dropping them in shouldn't be too painful
<fwereade_> wrtp, quite possible, it was "done" back in oct/nov but I'm not sure it actually went in until jan :/
<wrtp> fwereade_: so the purpose of using upstart is... to restart things when they fall over?
<fwereade_> wrtp, yeah, and when the machine reboots, etc
<wrtp> fwereade_: of course. do we reboot machines now?
<fwereade_> wrtp, we should be able to, yeah, nobody's complained about it not working
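The reasons given above (restart on crash, restart on reboot) map directly onto upstart's `respawn` and `start on` stanzas. A minimal, hypothetical generator for such a job file — the type and field names are illustrative, not juju's actual upstart package:

```go
package main

import "fmt"

// UpstartConf describes a simple upstart job (illustrative fields).
type UpstartConf struct {
	Name string // job name, e.g. the file basename under /etc/init
	Desc string // human-readable description
	Cmd  string // command line for the agent process
}

// Render produces the job file body: respawn handles crashes,
// the runlevel stanzas handle reboots.
func (c *UpstartConf) Render() string {
	return fmt.Sprintf("description %q\nstart on runlevel [2345]\nstop on runlevel [!2345]\nrespawn\nexec %s\n",
		c.Desc, c.Cmd)
}

func main() {
	c := &UpstartConf{
		Name: "jujud-machine-0",
		Desc: "juju machine agent",
		Cmd:  "/usr/lib/juju/jujud machine --machine-id 0",
	}
	fmt.Print(c.Render())
}
```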
<mthaddon> hi there, I'm interested in finding out where I would find the most up to date version of a particular charm - haproxy in this case - would that just be lp:~charmers/charms/precise/haproxy/trunk ?
<fwereade_> haha, cute: "There are 16 methods of Regexp that match a regular expression and identify the matched text. Their names are matched by this regular expression: "
<wrtp> fwereade_: :-)
<fwereade_> wrtp, would you consider `func (s *Service) pid() int`, in which -1 signals an error, to be evil? (when none of the clients care what the error is, and (int, err) therefore ends up feeling somewhat redundant)
<wrtp> fwereade_: i don't mind it, but it's possible gustavo might...
<wrtp> fwereade_: i don't mind in-band errors when reasonable
<wrtp> fwereade_: and that seems fine to me.
<wrtp> fwereade_: strings.Index is another example
<fwereade_> wrtp, cool, we'll see how it flies :)
<fwereade_> wrtp, in-band is ok if it's out-of-domain, as it were
<wrtp> not sure i get that... out-of-domain?
<fwereade_> wrtp, hm, I should probably have said "range", been a long time since I did maths ;)
<wrtp> fwereade_: ah, yeah.
<wrtp> fwereade_: out of range and no other information added
<TheMue> Ha! state has now 100 tests. Yeah.
<wrtp> i *thought* the time was earlier than i expected... then i looked at a phone and found out that the clock at the top of my desktop had been stopped for 4 hours!
 * wrtp steels himself to factory-wipe his old phone
<TheMue> wrtp: Hehe. You've got to show me on Monday.
<wrtp> TheMue: new one is really quite nice
<wrtp> TheMue: only real downer is i can't plug in extra SD card memory
 * TheMue waves, we'll see each other on Monday. Today Carmen and I have a dinner; tomorrow is our 23rd wedding anniversary.
<wrtp> TheMue: when do you arrive in Oakland?
<TheMue> wrtp: Sunday at about 12:20.
<TheMue> wrtp: Eh, that's SFO. I think it will be about 14:00 at the hotel.
<wrtp> i wonder where those "chunk mismatch" errors come from
<TheMue> wrtp: We should talk about our workflow. Perhaps you have a different order or other steps that you do.
<wrtp> TheMue: i haven't managed to work out any common factors
<wrtp> TheMue: i only get that error very occasionally
<TheMue> wrtp: Strange, indeed.
<wrtp> TheMue: and the diffs have uploaded fine
<TheMue> wrtp: When will you arrive? Early enough for Mark's talk?
<wrtp> TheMue: saturday afternoon.
<wrtp> TheMue: early enough to go to Jono's heavy metal gig :-)
<TheMue> wrtp: Ah, hehe, have fun. And we'll meet at the talk.
<wrtp> TheMue: deffo
<TheMue> wrtp: I'm off now for a fine Steak. Yeah.
<fwereade> need to be off for now; see you all in oakland :)
<wrtp> fwereade: see you soon!
<wrtp> right, i'm off. next stop UDS! look forward to seeing everyone again.
#juju-dev 2012-05-05
<wrtp> fwereade: yo!
#juju-dev 2013-04-29
<davecheney> fwereade: ping
<fwereade> davecheney, heyhey
<fwereade> davecheney, I'm not actually starting work for a little while but I wanted to check in
<fwereade> davecheney, ISTM that the ultra-short version of what's indicated by the load tests is "we need the internal api"
<fwereade> davecheney, but we should probably be investigating stuff like the presence pings too :/
<davecheney> fwereade: lemmie know when you are free for a chat with our vocal cords
<davecheney> fwereade: there are some details which aren't captured in your summary, but that is a good first order approximation
<davecheney> Looking for a review of https://code.launchpad.net/~dave-cheney/juju-core/120-upstart-add-limit-verb/+merge/161275 and https://code.launchpad.net/~dave-cheney/juju-core/119-fix-mongo-ulimits/+merge/161276
<TheMue> morning
 * davecheney waves
<davecheney> fwereade: le ping
<fwereade> davecheney, la pong
<fwereade> davecheney, g+?
<fwereade> TheMue, rogpeppe1, heyhey
<rogpeppe1> davecheney, fwereade, TheMue: yo!
<davecheney> fwereade: i'll pop upstairs
<TheMue> davecheney, fwereade, rogpeppe1: moin
<fwereade_> wallyworld_, ping
<rogpeppe1> davecheney: just out of interest, do you know how big allmachines.log ended up in your scale tests?
<davecheney> rogpeppe1: 40-50mb
<davecheney> uncompressed
<davecheney> your changes really helped
<davecheney> but in r 1197 i think some debug logging leaked back in
<rogpeppe1> davecheney: thanks. and that's after 8 hours or so?
<davecheney> rogpeppe1: yup
<davecheney> 1200 machines * 3 hook invocations each == not that much
<rogpeppe1> davecheney: yeah
<fwereade_> rogpeppe1, davecheney: another thing we could do is to stop logging *all* the hook output and just report what's juju-logged
<rogpeppe1> fwereade_: actually, i think it's really very useful to see all the hook output
<fwereade_> rogpeppe1, davecheney: possibly store it, and log it if there's an error, but discard on success
<rogpeppe1> fwereade_: yeah, that could work
<davecheney> rogpeppe1: i like that
<davecheney> we don't need to see the hook output in real time
<davecheney> well, I don't
<davecheney> maybe the debug-log users do need it
<davecheney> maybe we need some feedback from the charmers
<fwereade_> davecheney, I can imagine there are some cases people would expect it, yeah
<fwereade_> davecheney, oh, for real log levels and dynamic switching between them...
<davecheney> fwereade_: please, don't mention the war
<fwereade_> davecheney, I think it is a reasonably high priority, but yeah, it's not something I really want to think about *today*
<dimitern> fwereade_, rogpeppe1: https://codereview.appspot.com/8540050 - PTAL
<davecheney> https://codereview.appspot.com/8797046/
<davecheney> https://codereview.appspot.com/8551048/
<dimitern> davecheney: looking
<davecheney> https://codereview.appspot.com/8540050#msg3 << bzzt
<dimitern> davecheney: what? they changed the not lgtm color to yellow?
<davecheney> dimitern: the matcher is really stupid
<davecheney> you can do
<davecheney> 'there is no way in hell I would consider giving this a LGTM'
<davecheney> and it will turn green
<dimitern> :) yeah
<dimitern> davecheney: 2 LGTMs from me you have
<davecheney> dimitern: thanks mate
<davecheney> maybe I should just hard code the values we need
<davecheney> rather than gold plating it
<dimitern> davecheney: it's good they're not hard coded I think - less cryptic imo
<davecheney> dimitern: yeah, that is what I wanted, constant at the top, impl down the bottom
<davecheney> rogpeppe1: fwereade_ dimitern was any decision made on backporting fixes to 1.10.x ?
<davecheney> I have been keeping this page https://docs.google.com/a/canonical.com/document/d/1jIxqV-4c06GWIfCpB0pb8nS4frFhKY0k3WwXj5QdytQ/edit
<rogpeppe1> davecheney: there are a few things that would be worth fixing in 1.10 i think
<fwereade_> davecheney, excellent, I will start using that
<davecheney> rogpeppe1: would you add them to thise page
<rogpeppe1> davecheney: will do
<davecheney> unless there is something that LP will do this for us
<rogpeppe1> davecheney: what's the significance of 65000 BTW?
<davecheney> rogpeppe1: i cargo culted that from an issue that I found on the mongo support forum
<davecheney> probably 40,000 would be enough
<rogpeppe1> davecheney: you've got two more LGTMs (with minor comments on both)
<davecheney> rogpeppe1: ta
<rogpeppe1> davecheney: you might want to include a link to the support issue so at least we know which cargo cult you subscribe to :-)
<davecheney> ALL PRAISE GOOGLE
<davecheney> rogpeppe1: it's linked from the LP #
<rogpeppe1> davecheney: i'd prefer it in the source, 'cos that's where the number has ended up
<davecheney> rogpeppe1: will do
<davecheney> can't be bothered doing LP shit tonight
<davecheney> it's late
<fwereade__> dimitern, reviewed
<dimitern> fwereade__: cheers
<dimitern> fwereade__: well, you're basically rewriting the whole thing with your suggestion
<dimitern> fwereade__: if you know better, why didn't you do it yourself?
<fwereade__> dimitern, it's not always clear what I know until I have something to contrast it to
<dimitern> fwereade__: yeah, i understand, but i hope you understand my frustration as well..
<fwereade__> dimitern, I do :(
<fwereade__> dimitern, I *think* that what I suggested casts the code more neatly in terms of the problem it's solving, but I am actually open to arguments
<fwereade__> dimitern, there may well be a case I'm missing that you handle
<dimitern> fwereade__: what's not clear with my code, apart from having several flags? i tried to simplify it and comment various parts, so it's easier to follow
<fwereade__> dimitern, it's just that I found it hard to keep track of them all, and there seemed to be more than the problem demanded
<dimitern> fwereade__: hmm.. ok i'll use your suggested code
<fwereade__> dimitern, it may be that all the complexity is essential, because I've missed at least one case, and I'm undecided whether or not it's important
<fwereade__> dimitern, can we have a G+ before we decide?
<dimitern> fwereade__: ok, i'll start one
<fwereade__> dimitern, sorry just started it
<fwereade__> https://plus.google.com/hangouts/_/a8d696030b277037ba7217906ea69be8e9f06a39?authuser=0&hl=en
<fwereade__> dimitern,
<wallyworld_> fwereade__: i was at soccer earlier, i can ping you after our standup if you like
<wallyworld_> jam: dimitern: standup?
<wallyworld_> fwereade__: standup finished if you are around
<fwereade__> wallyworld_, sgtm, would you start one?
<wallyworld_> ok
<fwereade__> wallyworld_, I just posted sketchy incomplete comments on https://codereview.appspot.com/8816045/, they will be relevant, but they represent a WIP, not final thoughts, and are inelegant
<wallyworld_> ok
<wallyworld_> https://plus.google.com/hangouts/_/78899c984df96bf7b6eca6babca07fdad4628c4e
<dimitern> fwereade__: did the changes and all tests still pass :) - https://codereview.appspot.com/8540050
<fwereade__> dimitern, sweet
<fwereade__> dimitern, sorry, got to pick up laura now though
<dimitern> fwereade__: np, when you can
 * dimitern lunch
<davecheney> Roger Peppe (Juju Core Development, CDO) Roger has been doing great work supporting the Go-Juju port from its inception, and his recent work with the GUI team to implement all of the APIs they needed to port from Python to Go showed diligence, responsiveness, and is a great example of cross-team collaboration.
<davecheney> ^ good work rogpeppe
<rogpeppe> davecheney: thanks. that was a total surprise!
<davecheney> you deserve it
<rogpeppe> davecheney: ta!
<TheMue> rogpeppe: +1 from me too, have been happy to read it.
<TheMue> rogpeppe: btw, these thoughts match to yours about logging and error messages: http://joearms.github.io/2013/04/28/Fail-fast-noisely-and-politely.html
 * rogpeppe is back from lunch
<rogpeppe> TheMue: thanks
<mattyw> rogpeppe, also +1 from me, the api is very easy to work with
<rogpeppe> mattyw: glad to hear it!
<dimitern> hey guys, fwereade just texted me his internet connection is down currently
<mramm2> that sucks
<mramm2> my internet was down last week
<mramm2> roger
<mramm2> has had no internet in forever
<mramm2> we are having bad internet luck recently
<dimitern> mramm2: mine is fine, but both my mic and my camera are playing tricks
<dimitern> mramm2: so i'll just listen and type
<mramm2> ok
<mramm2> sounds like it will be a small meeting
<mramm2> rogpeppe: your internet working yet?
<rogpeppe> mramm2: nope
<rogpeppe> mramm2: i have a possible fix date though
<rogpeppe> mramm2: about 30 hours from now
<mramm2> perhaps we should just do the meeting here on IRC
<mramm2> or skip it for this morning
<dimitern> mramm2: +1 for both options
<mramm2> what are folks working on?
<mramm2> I see frank is working on status output.
<rogpeppe> mramm2: i can do G+ through my phone and the expected ghost-like appearance...
<dimitern> let's do it here
<mramm2> yea, but dimitern is not able to G+ and fwereade has no internet, so it would be a small contingent
<TheMue> mramm2: i moved it just back, because i've got only today and tomorrow. i'm working on the blueprints.
<mramm2> TheMue: thanks
<mramm2> so that is the other thing, I'll be contacting you individually but I added your names to some blueprints on the epic backlog
<dimitern> mramm2: i'm still on upgrade-charm --switch - will appreciate a review from rogpeppe; had a g+ earlier with william and fixed some things, simplified the code
<mramm2> (true for all of you) and will need your help to fill them in with what we know on the subject already
<rogpeppe> dimitern: ah, will review
<mramm2> dimitern: nice
<mramm2> rogpeppe: thanks
<dimitern> (i think that was when g+ broke for me - gtalk plugin crashed and lost sound after restarting)
<TheMue> mramm2: yes, i've seen mine and i'm pretty fine with them (only right now no clue about the ssh provider)
<rogpeppe> i'm about to propose godeps into juju-utils, which should give us the ability to at least do *something* with our dependency management problems
<rogpeppe> or actually juju-utils is not the right place
<rogpeppe> probably launchpad.net/godeps
<mramm2> TheMue: that is just a provider that requires that you give ssh keys to an existing machine
<mramm2> which it will log into and take over
<TheMue> rogpeppe: if it is multi-purpose for any go-project then lp/godeps
<rogpeppe> TheMue: it is
<mramm2> rogpeppe: TheMue: agreed
<dimitern> rogpeppe: what's godeps?
<TheMue> mramm2: aha, ok, thx
<rogpeppe> dimitern: here's the usage information for it: http://paste.ubuntu.com/5616028/
<dimitern> rogpeppe: so this is basically gofix for imports?
<rogpeppe> dimitern: no. it doesn't touch source code.
<rogpeppe> dimitern: the idea is that when releasing you'd do (in juju-core) godeps > juju.deps
<dimitern> rogpeppe: ah, like pip freeze, nice!
<rogpeppe> dimitern: then when you want to build the release, you'd do: godeps -u juju.deps
<rogpeppe> dimitern: which would update all the dependencies to the expected versions
<rogpeppe> dimitern: i think we discussed something similar in atlanta
<dimitern> rogpeppe: yeah, and i thought hazmat and mgz did some python tool, but never heard what happened next
<rogpeppe> dimitern: i don't know either.
 * TheMue is happy about it. so far I helped myself with a simple little script that wonderfully fails if new dependencies are added.
<hazmat> dimitern, rogpeppe  its very minimal, basically reads a frozen description and updates the tree to it lp:goreq
<rogpeppe> hazmat: how do you get the description?
<dimitern> hazmat: ah, thanks - will take a look
<hazmat> rogpeppe, text file, import name space rev per line
<rogpeppe> hazmat: so you'd manually do that?
<hazmat> rogpeppe, at the moment yes, but you could also generate a freeze
<hazmat> rogpeppe, ideally you could just spec a date,
<hazmat> and have tree updated to revs from the date
<mgz> you really want it manual
<rogpeppe> hazmat: i'm imagining that we'll bake the rev file into the tree
<rogpeppe> mgz: how so?
<mgz> well, some little tool to bump everything and run tests doesn't hurt
<hazmat> there's a nicer version of the src tree management in the juju-deployer work, that i want to copy over to goreq or setup as a common dep
<mgz> but the point is you want specific revs of the deps versioned in the main project
<mramm2> yea, requirements.txt is the part of pip that is critical, but pip freeze is *very* nice
<rogpeppe> mgz: i agree
<rogpeppe> mgz: is there a problem with the semantics of the command i outlined above?
<mgz> I don't completely understand it from that --help, but in short update is hard to do correctly (that shouldn't stop us making a start on this)
<rogpeppe> mgz: why is it hard?
<rogpeppe> mgz: i haven't done it yet, but i'm assuming it's as simple as "check it's clean; bzr update -r revno (or similar for other vcs's)"
<mgz> what does the tool do when I have a different branch checked out in bzr, or a different head in git, or am on a different branch in hg
<mgz> you need vcs-specific bits of sanity code
<rogpeppe> mgz: presumably the vcs will error that the particular revision id is not available
<mgz> this is mostly a problem where it's trying to do all the deps in one go, you'll generally expect someone to be able to resolve an issue in the branch that's cwd
<rogpeppe> mgz: i was not planning on using revision numbers BTW (except as extra info in the file for user consumption only)
<rogpeppe> mgz: i still don't quite see the issue, i'm afraid
<mgz> just knowing the working tree is clean isn't enough
<rogpeppe> mgz: i was planning on doing a cleanliness check in all the deps too
<rogpeppe> mgz: and making sure that they're all updated to the versions as specified in the deps file
<mgz> it's possible to say, update my feature branch of goose to a new trunk rev, but it's impolite as it changes the meaning of the branch, and makes it annoying to get back my work
<mgz> the point is "cleanliness check" is hard, and hard in different ways for the different vcses
<rogpeppe> mgz: ah. i thought there was an equivalence of "bzr status" for all of them
<rogpeppe> mgz: git certainly has one. and hg too.
<rogpeppe> s/equivalence/equivalent/
<mgz> there is, but that only tells you about the working tree state, not about the branch
<rogpeppe> mgz: i take your point about changing the feature branch to something else
<rogpeppe> mgz: i think that's unavoidable unless we want to make a new copy of everything
<rogpeppe> mgz: which is one possible approach
<rogpeppe> mgz: another is: godeps > /tmp/old; godeps -u something.deps; go build; godeps -u /tmp/old
<rogpeppe> mgz: i.e. it makes it easy to save your current state and revert to it
<mgz> so, a script runnable by devs is reasonable I think, but we also want the export thing (it's basically what my tarball construction did)
<rogpeppe> mgz: yes, the export thing is orthogonal
<rogpeppe> mgz: and useful in any case
<rogpeppe> mgz: i don't think it has to be part of the same command
<mgz> there's nothing wrong with starting out dumb and improving it, there are just going to be some gotchas until it grows some vcs smarts
<mgz> it should consume the same file
<rogpeppe> mgz: possibly. although it could just consume go source. it doesn't really need to know anything about vcs
<mgz> it wants to know what revs of the deps need to go in the tar
<mgz> the deps file has that information
<rogpeppe> mgz: wouldn't this work ok? go deps -u v1.10.0.deps; goexport mypkg
<rogpeppe> mgz: i.e. it would just work with the go source as it finds it
<mgz> sure, so goexport is a thing?
<rogpeppe> mgz: it would still need to know about the dependency graph, but that's trivial with the go build package
<mgz> the point is that's hard to implement robustly
<rogpeppe> mgz: that's my provisional name for a non-hacky version of your script
<mgz> whereas it's easier to just export rather than update, then copy that
<mgz> because update is not a clean operation
<rogpeppe> mgz: hmm, possibly.
<rogpeppe> mgz: i'd kinda prefer to isolate the vcs smarts to one place though
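On the "just consume go source" point: extracting a file's imports really is straightforward with the standard go/parser package, and recursing over them via the go/build package gives the dependency graph rogpeppe mentions. A minimal, self-contained sketch of the first step — the helper name imports is illustrative, not anything in godeps:

```go
package main

import (
	"fmt"
	"go/parser"
	"go/token"
	"strconv"
)

// imports extracts the import paths from a Go source file. A real tool
// would then recurse over each path via the go/build package to get the
// full dependency graph.
func imports(src string) ([]string, error) {
	fset := token.NewFileSet()
	// ImportsOnly stops parsing after the import block, so this is cheap.
	f, err := parser.ParseFile(fset, "src.go", src, parser.ImportsOnly)
	if err != nil {
		return nil, err
	}
	var paths []string
	for _, imp := range f.Imports {
		p, err := strconv.Unquote(imp.Path.Value) // strip the surrounding quotes
		if err != nil {
			return nil, err
		}
		paths = append(paths, p)
	}
	return paths, nil
}

func main() {
	src := "package demo\nimport (\n\t\"fmt\"\n\t\"launchpad.net/goyaml\"\n)\n"
	paths, err := imports(src)
	if err != nil {
		panic(err)
	}
	fmt.Println(paths) // prints [fmt launchpad.net/goyaml]
}
```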
<ahasenack> hi guys, is there an ETA for having this bug handled? https://bugs.launchpad.net/juju-core/+bug/1121907
<_mup_> Bug #1121907: deploy --config <cmdline> <juju-core:New> <https://launchpad.net/bugs/1121907>
<ahasenack> the option exists, but does nothing
<fwereade> ahasenack, funnily enough that one just reached the top of my list
<ahasenack> :)
<ahasenack> hm, in pyjuju, relation-list has the -r option:   -r RELATION ID
<ahasenack> does relation-list in juju-core have something similar? I'm looking through the code and docs
<ahasenack> or is it without the -r?
<fwereade> ahasenack, it seems as though it accepts it as an (optional) arg, not a flag -- I have no recollection of how this decision came about
<ahasenack> fwereade: you mean just foo.yaml, not --config foo.yaml?
<fwereade> ahasenack, ah sorry I'm talking about relation-list
<ahasenack> ah
<ahasenack> fwereade: yeah, I just hacked the charm I was testing to omit -r and it installed now
<ahasenack> fwereade: but you will need to put -r back if you want to remain compatible with pyjuju
<ahasenack> fwereade: https://bugs.launchpad.net/juju-core/+bug/1172895
<_mup_> Bug #1172895: relation-list incompatibility with pyjuju: -r <juju-core:New> <https://launchpad.net/bugs/1172895>
<fwereade> ahasenack, quite so
<ahasenack> initially I thought it was something like a short-name and long-name option thing
<ahasenack> like -r vs --relation-id, and only the long one was supported
<fwereade> ahasenack, consider that higher-priority than --config, because at least I can do that quickly and the code won't make me want to set fire to things ;p
<ahasenack> :)
<ahasenack> do you guys have a go syntax highlighting file for vim by any chance? Raring doesn't come with one it seems
<TheMue> ahasenack: it should be in the misc folder
<ahasenack> I'm searching the packages I have installed, no luck so far
<ahasenack> will grab it from the code repo then
<TheMue> ahasenack: i'm just looking
<TheMue> ahasenack: but i have to admit i've installed go directly ;)
<TheMue> ahasenack: did you also install vim-syntax-go?
<ahasenack> TheMue: oh, no, there you go
<ahasenack> TheMue: apt-cache search "go" doesn't help much, that package should be called vim-syntax-golang
<ahasenack> uh, it wants to install ruby
 * ahasenack --no-install-recommends
<TheMue> ahasenack: yes, would be better. the emacs file is named golang-mode, the kate file kate-syntax-go *hmpf*
<ahasenack> that package doesn't seem to work...
<ahasenack> the vim one
<TheMue> *sigh*
<ahasenack> going back to the one from the source, that worked
<ahasenack> http://go-lang.cat-v.org/text-editors/vim/ with these instructions
<fwereade> https://codereview.appspot.com/9015043 if anyone's interested
 * dimitern nags: https://codereview.appspot.com/8540050/
 * fwereade apologises to dimitern, just a couple of last points, just sent
<rogpeppe> dimitern, fwereade: i'm having difficulty persuading myself that this line is correct: if *newURL == *oldURL && !explicitRevision {
<fwereade> rogpeppe, it's not
<rogpeppe> fwereade: good, it's not just me then
<fwereade> rogpeppe, explicitRevision shouldn't come in at that point
<rogpeppe> fwereade: oh, interesting, i thought the other way round
<rogpeppe> fwereade: that the urls shouldn't come in at that point
<fwereade> rogpeppe, I don't think explicitRevision matters except when choosing whether to bump -- because bumping when a specific rev is already specified is clearly insane
<rogpeppe> fwereade: because i think i've persuaded myself that it may be useful to bump the revision even when the new url is different
<fwereade> rogpeppe, whereas that whole block is all about what happens if it seems there's actually nothing to upgrade to
<rogpeppe> fwereade: "bumping when a specific rev is already specified is clearly insane" - isn't that what the explicitRevision check is checking for?
<fwereade> rogpeppe, yes, it's meant to stop that happening, in my mind
<rogpeppe> fwereade: yeah. so we need explicitRevision in that test, no?
<fwereade> rogpeppe, definitely not
<fwereade> rogpeppe, what about a service running the latest charm from the charm store? *oldURL == *newURL
<fwereade> rogpeppe, so we want to get in there and hit "already running latest charm"
<rogpeppe> fwereade: hmm, yes, good point
<rogpeppe> fwereade: but the case i'm thinking of is something like this:
<rogpeppe> fwereade: we have a local copy of a charm, deploy it, push it to the charm store, switch to that, then make some changes to the local copy and switch back to that
<rogpeppe> fwereade: in that case, we want the version to be bumped, but the urls are different
<rogpeppe> fwereade: i'm wondering if the local charm test should be at the outer level
<fwereade> rogpeppe, hmm
<rogpeppe> fwereade: so we always bump version if the charm is local
<dimitern> rogpeppe: we bump when the target url will be the same as the source one, the repo is local and the charm is a dir
<fwereade> rogpeppe, I'm pretty sure that causes unwanted behaviour when upgrading to a specific version
<fwereade> rogpeppe, hey, you have v33 and v34 in your repo
<fwereade> rogpeppe, you asked for 33 so I changed it to 34
<fwereade> rogpeppe, no need to thank me
<rogpeppe> fwereade: ah yes, we should bump version if the version isn't explicitly specified *and* it's local
<fwereade> rogpeppe, I *think* so yes
<dimitern> fwereade: that's exactly what this !explicitRevision case is there for
<fwereade> dimitern, but it breaks the no-newer-version-in-charm-store case, doesn't it?
<rogpeppe> dimitern: see my thought experiment above:
<fwereade> rogpeppe, dimitern: bumpVersion is crazy broken anyway though
<rogpeppe> fwereade: local charms are crazy broken :-)
<fwereade> rogpeppe, dimitern: true
<dimitern> fwereade, rogpeppe: i lean more and more towards having --bump-revision as explicit argument
<rogpeppe> dimitern: as it was originally
<rogpeppe> dimitern: yeah
<dimitern> and leave all the trickery to user's consent
<rogpeppe> dimitern: i think i agree
<fwereade> dimitern, rogpeppe: hmm
<rogpeppe> dimitern: don't be tricksy unless i specifically say so
<rogpeppe> then at least the user is aware that there's subtle magic going on
<dimitern> rogpeppe: exactly, even "dumb" - if --bump-revision is specified, we *always* do it, if possible (local charm + dir)
<fwereade> rogpeppe, dimitern, if I had to I think I'd come down on the side of --no-bump-revision
<rogpeppe> fwereade: that doesn't really help
<rogpeppe> fwereade: we'd need all the same heuristics we have now
<dimitern> fwereade: yeah, don't see how this helps
<rogpeppe> fwereade: the nice thing about --bump-revision is that we can ditch all of 'em
<fwereade> rogpeppe, dimitern: it solves the use case, which --bump-revision doesn't really
<rogpeppe> fwereade: depends if you think that the magic is really worth it
<dimitern> fwereade: --bump-revision doesn't claim to solve anything - it's just an explicit request from the user
<fwereade> dimitern, then it's a regression
<rogpeppe> fwereade: with --bump-revision, the model is very clear
<rogpeppe> fwereade: so less surprises to the user, i think
<dimitern> yeah, my point exactly
<rogpeppe> fwereade: i agree that the user will have to type a bit more though
<rogpeppe> fwereade: but if they don't, they'll get an error message
<fwereade> rogpeppe, dimitern: and it breaks compatibility, doesn't it?
<rogpeppe> fwereade: really? did py juju do auto bump?
<fwereade> rogpeppe, yes
<fwereade> rogpeppe, it's hugely popular
<rogpeppe> aw shucks
<dimitern> fwereade: in a good way - compatibility shouldn't be absolute if it forces us to twist things like that
<rogpeppe> ok then, we have to do it. but let's get it right at least. (whatever that means :-])
<dimitern> fwereade: how can you tell that?
<fwereade> dimitern, in a "we no longer solve a big and expensive problem for users" way, really
<fwereade> dimitern, because they asked for it, and we gave it to them, and there was great rejoicing
<dimitern> fwereade: the users are not as dumb as we think they are perhaps
<fwereade> dimitern, because nobody spent 20 mins with N machines on the clock trying to figure out why their fixes didn't work any more
<dimitern> fwereade: getting a meaningful error when you go wrong is helpful, not getting an error and subtly screwing up in hard to determine ways is, imho
<dimitern> fwereade: so i'm asking again - do you want to take that one over and do it how you think it will be ok?
<fwereade> dimitern, I'm not regressing this unless we can determine a better story for charmers than the one we have
<dimitern> fwereade: i feel more and more out of my league here - doing things where i'm not completely getting all the repercussions
<fwereade> dimitern, the user expectation is that, when upgrading from a local repo, a new charm will be bundled with a new revision
<fwereade> dimitern, I *think* that an explicit request for some other rev should probably override this though
<dimitern> fwereade: how about --revision # or --switch=blah-# ?
<rogpeppe> fwereade: s/a new charm/the charm/ ?
 * fwereade knows the feeling
<dimitern> fwereade: ok, so I think I covered that, didn't i?
<fwereade> dimitern, those are I think explicit revisions, but you do the test in the wrong place
<rogpeppe> dimitern: i don't think your code covers this scenario:
<rogpeppe> [16:54:14] <rogpeppe> fwereade: we have a local copy of a charm, deploy it, push it to the charm store, switch to that, then make some changes to the local copy and switch back to that
<dimitern> fwereade: where should the test be then?
<fwereade> dimitern, the whole `if *newURL != *oldURL` block is about eliminating situations that imply we can't bump
<dimitern> fwereade: or bump it if needed
<fwereade> dimitern, I'm just suggesting another check in there that bails out with "already running that charm" if the rev is specified explicitly
<dimitern> fwereade: actually it's exactly when to bump, by default we never bump
<dimitern> fwereade: when? as a last check or first?
 * fwereade shrugs -- first maybe?
<dimitern> fwereade: so basically would "bumpRevision = !explicitRevision" do that in the last block?
<fwereade> dimitern, doing it first of all seems neater -- we can have a message tailored to its explicitness
<dimitern> fwereade: ah, you want an error as well, ok so it looks to me it's likely to be the last check before setting bumpRevision to true
<rogpeppe> dimitern: bump-revision = charm is local and revision is not explicit
<rogpeppe> dimitern: ?
<fwereade> dimitern, if we don't hit that, the existing "already running latest" bit makes more sense
<dimitern> fwereade: if it's first, we won't handle the case when we cannot bump anyway - be it cs: url or a bundle
<fwereade> dimitern, if it's first we give the best error message for the situation, I think
<dimitern> fwereade: so "already running latest charm cs:blah-12" is a better error?
<fwereade> dimitern, well "latest" is a problem if it's an explicit revision
<fwereade> dimitern, it only becomes appropriate once we've eliminated the explicit case, at which point we can be sure the new url is the latest one
<dimitern> fwereade: then (and i thought that before as well) these 3 error cases should be different
<dimitern> fwereade: having the same msg 3 times seems confusing
<fwereade> dimitern, could very well be, it's always a good thing to improve the error reporting
<dimitern> fwereade: how about "already running latest charm X: - explanation" and the explanation will be different in each case
<fwereade> dimitern, "latest" doesn't always apply
<fwereade> dimitern, "already running specified charm" and "already running latest charm" seem to cover reasonably well, but a 3rd "cannot bump revision of local charm bundle" might be reasonable
<dimitern> fwereade: ok then "cannot upgrade to charm X: - [explicit revision matches latest|not a local repository|not a directory]"?
<fwereade> dimitern, the "latest" and "explicit" cases are different
<fwereade> dimitern, we never even check latest when there's an explicit revision
<dimitern> fwereade: ok, agreed - but i still think the 2 cases for latest could be reported differently
<fwereade> dimitern, "already running latest charm" is way more important than "not local" or "not dir" -- the local bumping is a special case that we only get to when we've eliminated reasons not to even try
<fwereade> dimitern, I would be fine differentiating them
<rogpeppe> fwereade: those error messages sound good to me
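The rule the three of them converge on — bump only for a local charm directory when no revision was given explicitly, with the explicit-revision check done first so the error message matches what the user asked for — can be sketched as plain Go. charmSpec and decide are hypothetical names, not juju-core's actual upgrade-charm code; this follows dimitern's same-URL summary and deliberately leaves rogpeppe's different-URL scenario unsettled, as the thread does:

```go
package main

import (
	"errors"
	"fmt"
)

// charmSpec is a hypothetical, flattened view of the state the
// upgrade-charm discussion is about.
type charmSpec struct {
	oldURL, newURL   string // deployed charm URL vs. resolved target URL
	explicitRevision bool   // user asked for a specific revision
	localRepo        bool   // charm comes from a local repository
	isDir            bool   // charm is an unpacked directory (bumpable)
}

// decide reports whether to bump the local charm's revision, or returns
// an error when there is nothing to upgrade to. Error wording follows
// the messages agreed in the discussion.
func decide(s charmSpec) (bump bool, err error) {
	if s.newURL != s.oldURL {
		return false, nil // genuinely newer charm: plain upgrade, no bump
	}
	// Same URL: check the explicit case first, so the message
	// matches the request ("latest" would be wrong here).
	if s.explicitRevision {
		return false, errors.New("already running specified charm")
	}
	if !s.localRepo {
		return false, errors.New("already running latest charm")
	}
	if !s.isDir {
		return false, errors.New("cannot increment revision of charm: not a directory")
	}
	return true, nil // local charm dir, no explicit revision: bump
}

func main() {
	_, err := decide(charmSpec{oldURL: "cs:wordpress-12", newURL: "cs:wordpress-12", explicitRevision: true})
	fmt.Println(err) // prints "already running specified charm"
}
```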
<dimitern> fwereade: ok, so with these changes it should be good to land then?
<fwereade> rogpeppe, dimitern: the "charm X" is particularly a problem in a wanted-to-bump-revision context
<fwereade> rogpeppe, dimitern: that's kinda a side effect of PutCharm's revision-changing
<dimitern> fwereade: i'm not following you there i'm afraid
<rogpeppe> dimitern: +1
<dimitern> fwereade: you mean the wording or the logic?
<fwereade> rogpeppe, dimitern: "cannot upgrade to charm X" when we wouldn't actually be upgrading to that revision
<fwereade> rogpeppe, dimitern: I'm fine changing them but I would like them to be more accurate not less
<dimitern> fwereade: totally agree on that
<fwereade> dimitern, I think we're reasonably clear on the logic regardless
<rogpeppe> mgz: stupid question: what's the accepted way of creating the initial branch for a project?
<rogpeppe> mgz: i've created lp:godeps, but i've tried bzr push lp:godeps/trunk; bzr push lp:~rogpeppe/godeps/trunk; bzr branch lp:~rogpeppe/godeps/trunk and others
<fwereade> dimitern, so with that fixed I'd gladly LGTM with maybe a suggestion for different messages
<rogpeppe> niemeyer: ^
<mgz> you push it to a normal location, then set that as the trunk in launchpad
<mgz> so, eg, lp:~juju/godeps/trunk or lp:~gophers/godeps/trunk or something
<mgz> rogpeppe: ^
<rogpeppe> mgz:
<rogpeppe> % bzr push lp:~rogpeppe/godeps/trunk
<rogpeppe> bzr: ERROR: At lp:~rogpeppe/godeps/trunk you have a valid .bzr control directory, but not a branch or repository. This is an unsupported configuration. Please move the target directory out of the way and try again.
<rogpeppe> mgz: how do i move the target directory out of the way when it's on a remote system?
<mgz> heh, what have you been doing...
<mgz> sftp
<rogpeppe> mgz: i've no idea :-)
<dimitern> fwereade, rogpeppe: ok, it's done, please take one last look (added a test for the case we discussed) - https://codereview.appspot.com/8540050
<fwereade> dimitern, on reflection I'd be fine with passing a local bundle into PutCharm with bumpRevision set and relying on the interior error handling
 * rogpeppe has never heard of sftp before
 * fwereade looks
<mgz> ...as in, as a thing, or as a mechanism for doing stuff with launchpad? :)
<rogpeppe> mgz: both
<rogpeppe> mgz: i presume the interface is similar to good ol' ftp
<mgz> yup, just connect to bazaar.launchpad.net as your launchpad user name
<dimitern> fwereade: let's not do that - it's better to be explicit and report a better error i think
<mgz> launchpad provides a pretty locked down environment
<rogpeppe> mgz: http://paste.ubuntu.com/5616423/
<rogpeppe> mgz: is it not possible to see what's there?
<mgz> nope
<mgz> you just need to go to the right place
<rogpeppe> mgz: ah. which is?
<fwereade> dimitern, "cannot increment revision of charm blah: not a directory" seems pretty good to me actually
<dimitern> fwereade: that's only for the last error msg?
<fwereade> dimitern, but, as you wish, LGTM
<fwereade> dimitern, yeah
<mgz> looking to see if anyone has written up a guide...
<fwereade> dimitern, just so we don't bother .(*charm.Dir)ing twice
<dimitern> fwereade: ok, cheers, will do
<mgz> rogpeppe: can't find one
<mgz> rogpeppe: basically, `cd ~YOURUSERNAME/PROJECT/BRANCHNAME`
<rogpeppe> mgz: is there really no other way to get lp's knickers out of this twist?
<mgz> then inspect the .bzr dir
<mgz> once you're in a location, you can poke around
<mgz> there's just no root filesystem
<fwereade> dimitern, rogpeppe: if either of you have a moment for https://codereview.appspot.com/9015043 I would appreciate it
<rogpeppe> mgz: so "cd ~rogpeppe" doesn't work but "cd ~rogpeppe/godeps/trunk" does. wonderful.
<dimitern> fwereade: i started looking at it, but got distracted - i'll take a look shortly
<rogpeppe> mgz: any way of doing an rm -r ?
<rogpeppe> mgz: other than manually recursing
<rogpeppe> mgz: 's'ok - it's done
<mgz> you can use bang
<rogpeppe> mgz: as in rm! ?
<mgz> hm, no, but there is some mv mechanism
<rogpeppe> ha, better and better
<mgz> you can also just delete the branch via the launchpad website if you don't care *why* it went wrong, generally
<rogpeppe> %  bzr push --use-existing-dir lp:~rogpeppe/godeps/trunk
<rogpeppe> bzr: ERROR: Server sent an unexpected error: ('error', 'xmlrpclib.Fault', "<Fault -1: 'OOPS-5682d217c06be19eecf72b1b1b0eb6a4'>")
<mgz> cute
<rogpeppe> mgz: hmm, i'll try that
<rogpeppe> mgz: i seem to be digging deeper and deeper here when i just wanna push a branch
 * fwereade sighs at the --config bug with mild despair
<fwereade> ok, I'll be back much later to talk to wallyworld_, but I'm off for now
<rogpeppe> mgz: success, finally.
<rogpeppe> mgz: note to self: always create branch *before* creating project.
<mgz> rogpeppe: that oops:
<mgz> 17:50 < wgrant> mgz: Looks like random long response times
<mgz> 17:50 < wgrant> Possible network/machine glitch.
<mgz> 17:50 < wgrant> eg. a SELECT * FROM Person WHERE id = foo; took 7s...
<rogpeppe> mgz: hmm.
<rogpeppe> mgz: it left me with the same situation i had before (.bzr dir but no branch)
<mgz> ...what are you doing.... you have a local branch with committed changes as cwd, right?
<mgz> any other oddness like cobzr involved?
<mgz> looks like a valid branch with zero revisions to me
<mgz> commit a readme or something
<rogpeppe> mgz: no i was trying to push an initial branch with no commits
<rogpeppe> mgz: which i now find is not possible
<mgz> well, it's possible but not useful
<rogpeppe> mgz: you can't diff against it
<mgz> also makes certain things useless
<mgz> right, can't merge or diff with it
<rogpeppe> mgz: i want the initial codereview CL to be everything i initially add
<rogpeppe> mgz: why is no commits a special case?
<rogpeppe> mgz: i guess i can commit --force
<mgz> or you can just commit a readme
<rogpeppe> mgz: or add an empty file or something
<mgz> or a license, which you need to host on launchpad anyway
<rogpeppe> mgz: i've already selected a licence
<rogpeppe> mgz: but perhaps i need it in the repo too
 * rogpeppe could get quite frustrated
<mgz> you can also do a cl with whatever
 * rogpeppe has reached end of day and has to go and build more raised vegetable beds.
<mgz> bed well.
<rogpeppe> mgz: will do. if you want to have a look at this, it needs more tests and more code, but this is the biggest part, i think: https://codereview.appspot.com/9016043
<mgz> sure.
<rogpeppe> mgz: ta
<rogpeppe> g'night all
<andrewsmedina> niemeyer: hi
<niemeyer> andrewsmedina: Heya
<andrewsmedina> niemeyer: congrats for the pure go goyaml version :)
<niemeyer> andrewsmedina: Just saw your mail, thanks for the tests there, and glad everything is working fine
<andrewsmedina> niemeyer: it works fine on go 1.1, but it breaks on go 1.0
<andrewsmedina> niemeyer: https://drone.io/github.com/globocom/config/1 :(
<niemeyer> andrewsmedina: Oho
<niemeyer> andrewsmedina: Let me fix that right now, hold on
<andrewsmedina> niemeyer: thanks :)
<fss> niemeyer: panic("unreachable") all the things
<niemeyer> Good timing
<niemeyer> andrewsmedina: Sorry, network was off right then
<niemeyer> Now using the 3G backup
<niemeyer> andrewsmedina: Fixes were pushed. Can you please pull and try again?
<andrewsmedina> niemeyer: it works!
<andrewsmedina> niemeyer: thank you!
<niemeyer> andrewsmedina: Woohay!
<niemeyer> andrewsmedina: np, thanks for the note
<andrewsmedina> niemeyer: what are the next steps? can I help you?
<niemeyer> andrewsmedina: You're already helping in the most important way. There are lots of small things we could do, but the only critical thing right now is testing.
<niemeyer> andrewsmedina: Everything else can be done in small increments over time
#juju-dev 2013-04-30
<rogpeppe> fwereade: i think it's quite straightforward actually
<rogpeppe> fwereade: none of the tools-choosing logic is in the provider currently
<fwereade> rogpeppe, the tools change? it's probably not *that* big I agree
<fwereade> rogpeppe, but it's another environ thing the api needs to know/care about and I'm not currently sure I know how that stuff works
<rogpeppe> fwereade: i think it's just a matter of adding some kind of canary tag to an agent's info in the state.
<rogpeppe> fwereade: if the api just provided a way to list the available tools, that would be sufficient, i think
<fwereade> rogpeppe, I'd rather be able to ask a machine directly what it should be running really
<rogpeppe> fwereade: essentially we'd be making environs.FindAvailableTools call a method on state rather than on the provider (it might even take an interface so it could work on both)
<rogpeppe> fwereade: i dunno, i don't mind much
<fwereade> rogpeppe, we could indeed do it that way too, I think I'd generally prefer to keep as much of that sort of work behind the API as possible
<rogpeppe> fwereade: i quite like the idea that an agent can choose how to upgrade itself.
<fwereade> rogpeppe, maybe it has advantages I'm not seeing
<rogpeppe> fwereade: it's more flexible and it's very similar to the current system, so less churn. it may be more flexible than we need, i dunno.
<rogpeppe> fwereade: i can see the upgrade path more clearly in my head this way
<rogpeppe> fwereade: but that's probably just me being fuzzy headed :-)
<fwereade> rogpeppe, we'll see how it looks when we actually implement it, that usually forces an extra dose of perspective
<rogpeppe> fwereade: indeed
<rogpeppe> fwereade: i'm hoping we can avoid too many semantic changes as we move towards using the API - they can happen later, i think.
<fwereade> rogpeppe, agreed in principle at least, I just don't think we can get away with *none*
<rogpeppe> fwereade: we'll see, i guess.
<rogpeppe> fwereade: i'm hoping for none, but there will probably be some necessary.
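rogpeppe's suggestion above — FindAvailableTools calling a method on an interface satisfied by both the state and the provider — might look roughly like this. toolsLister, fakeState, and the version string are illustrative stand-ins, not juju-core's real types:

```go
package main

import "fmt"

// toolsLister is a hypothetical interface that both the provider-backed
// and the state/API-backed implementations could satisfy, so the
// tools-choosing logic is written once.
type toolsLister interface {
	ListTools(majorVersion int) ([]string, error)
}

// fakeState stands in for either backend in this sketch.
type fakeState struct{ tools []string }

func (s fakeState) ListTools(majorVersion int) ([]string, error) {
	return s.tools, nil
}

// findAvailableTools depends only on the interface, mirroring the idea
// of moving environs.FindAvailableTools off the provider directly.
func findAvailableTools(l toolsLister, major int) ([]string, error) {
	return l.ListTools(major)
}

func main() {
	tools, err := findAvailableTools(fakeState{tools: []string{"1.10.0-precise-amd64"}}, 1)
	if err != nil {
		panic(err)
	}
	fmt.Println(tools[0]) // prints "1.10.0-precise-amd64"
}
```

Since both paths go through the same interface, an agent could choose its own upgrade target whichever backend it happens to be talking to.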
<ahasenack> hi guys, how do you update all the branches that go get fetched and placed in $GOPATH/src/*? go get doesn't seem to update
<ahasenack> you just go into each one and bzr pull / git pull / hg whatever?
<dimitern> ahasenack: either that, or use go get -u project-import-path/... - will fetch and update the project and all dependencies
<ahasenack> dimitern: ah, cool, -u is what I was looking for, thanks
<rogpeppe> "The engineers onside have advised that due to the severity of the damage they have to supply a revised fix date. They have confirmed that it is now 03/05/2013."
<rogpeppe> looks like my internet is going to continue to be knackered for a while yet then
<rogpeppe> it will have been dead for two weeks tomorrow :-(
<TheMue> rogpeppe: at least you'll have hope that it is fixed when we're returning from oakland
<mgz> good thing you won't be there next week rogpeppe
<rogpeppe> that is true
<rogpeppe> not sure carmen will be very happy with no skype calls tho'
<rogpeppe> not to mention no bbc iplayer :-)
<mgz> :)
<TheMue> :)
<rogpeppe> my phone has been a very busy gateway these last weeks
<rogpeppe> mobile hotspot ftw
<rogpeppe>  if anyone fancies a non-juju review, here are the first two branches of the new godeps tool: https://codereview.appspot.com/9016043/ https://codereview.appspot.com/9047043/
<hazmat> rogpeppe, to avoid the working copy not pointed to master.. ideally you could detect origin/remote to see if it's a non default checkout/branch, also an IsModified to check working copy for changes, abort in both cases
<mgz> hazmat: right.
<hazmat> fwiw, here's my take atm on a similar problem, i don't have the branch/origin detection yet though. IsModified was a little tricky with bzr non current revs via the cli, i ended up using it as a lib.. http://bazaar.launchpad.net/~hazmat/juju-deployer/refactor/view/head:/deployer.py#L392
<rogpeppe> hazmat: in a call
<rogpeppe> hazmat: does it really matter if the working copy isn't pointed to master?
<mgz> rogpeppe: yes, as we were talking about yesterday
<mgz> you can make work-in-progress hard to recover (and probably without much sign it's been unlinked)
<rogpeppe> fwereade: is the relation-list CL fixing a bug? it seems to me the change just removes an optional argument. is that right?
<fwereade> rogpeppe, it drops the arg and uses a flag instead
<rogpeppe> fwereade: oh yes, i'd missed the fact it wasn't using Var before
<rogpeppe> mgz, hazmat: istm that if we make it easy for the user to restore things as they were (for instance by printing the godeps before the modification) then it's not too bad a problem
<mgz> that doesn't help overwriting the head of someone's in-progress feature branch
<rogpeppe> mgz: we wouldn't overwrite any work that couldn't be restored easily
<mgz> rogpeppe: try it.
<rogpeppe> mgz: ok, i will. what's the exact scenario you have in mind, so i can try it?
<mgz> `bzr switch -b feature && bzr commit --unchanged -m "Important work, really" && ???`
<mgz> we're pretty careful about not chucking stuff away, but you can easily confuse people
<rogpeppe> mgz: what work will be lost there if we do "bzr update -r revid:something" ?
<mgz> the question is, what steps do you do to restore your feature branch?
<hazmat> rogpeppe, in the absence of certainty refuse the temptation to guess, if it's modified or not pointing to the origin, bailing is the simplest correct thing to do
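hazmat's rule ("refuse the temptation to guess") amounts to a pre-flight check: before a tool like godeps rewrites a checkout, bail out if the tree has uncommitted changes or is not on its default branch. A minimal sketch of that decision, assuming the dirty/branch facts have already been obtained from the VCS (e.g. via `git status --porcelain` and `git rev-parse --abbrev-ref HEAD`); the function name is illustrative, not godeps' actual code.

```go
package main

import (
	"errors"
	"fmt"
)

// checkSafeToUpdate returns an error unless the working copy is clean
// and checked out on its default branch; callers should abort on error
// rather than guess at the user's intent.
func checkSafeToUpdate(dirty bool, branch, defaultBranch string) error {
	if dirty {
		return errors.New("working copy has uncommitted changes; aborting")
	}
	if branch != defaultBranch {
		return fmt.Errorf("checkout is on %q, not %q; aborting", branch, defaultBranch)
	}
	return nil
}

func main() {
	fmt.Println(checkSafeToUpdate(false, "master", "master")) // safe: prints <nil>
	fmt.Println(checkSafeToUpdate(true, "master", "master"))  // dirty tree: error
	fmt.Println(checkSafeToUpdate(false, "feature", "master")) // wrong branch: error
}
```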
<rogpeppe> hazmat: i'm just wondering about the "non pointing to the origin" thing here. what's the "origin" when you're using git for example?
<mgz> (answer: `bzr update` and perhaps `bzr revert` if there were conflicts, but you need to notice and do those steps to recover,
<mgz> and, it's different for each vcs)
<hazmat> rogpeppe, default remote master
<rogpeppe> mgz: will "bzr update revid:foo; bzr update revid:bar" not always end up with revid:bar ?
<mgz> yes, but you don't know the state of the tree after doing those two commands
<rogpeppe> mgz: insert -r's as appropriate
<mgz> (the first will helpfully fail if the branch doesn't actually contain the revision at least)
<rogpeppe> mgz: oh. i thought the point of "bzr update -r revid:x" was to change the tree to update to revid:x.
<mgz> also, git does GC and things, so I don't think you can trust a dangling head will persist for ever
<rogpeppe> mgz: what other state might the tree end up in?
<rogpeppe> mgz: that's an interesting point.
<rogpeppe> mgz: although hopefully it won't be a dangling head, right?
<rogpeppe> mgz: it should be part of the history, and if it's not you should be able to fetch it with go get -u.
<rogpeppe> mgz: oh, i see
<rogpeppe> mgz: you mean the user's dangling head
<rogpeppe> mgz: hmm
<mgz> rogpeppe: my feature branch removes file X. trunk changes file X. you update my feature branch to trunk... what is X? you update again back to the head rev from the feature branch... what is X?
<rogpeppe> mgz: doesn't a given revid specify file X absolutely?
<mgz> bzr answer: conflict, you have both variants present in the tree and need to resolve (or use `bzr revert` to just wipe state back to the current selected rev)
<mgz> sure, but update is really a three way merge when history has diverged
<mgz> it's not a "wipe everything and switch to this"... because that's less safe (and has a different spelling)
<rogpeppe> mgz: naively, i'd have thought that X after updating to trunk would be X as changed in trunk (newly created); updating back to the head rev would remove it again.
<rogpeppe> mgz: but you're saying that's not the case?
<mgz> nope.
<rogpeppe> mgz: this destroys my faith in vcs's :-)
<mgz> it should help encourage you, otherwise your un-landed changes would be much easier to accidentally lose
<mgz> (and are I think, in one spelling of this people use with git)
<rogpeppe> mgz: i always thought that, given a clean tree, rm -r *; bzr update revid:x; would be semantically the same as bzr update revid:x (but faster)
<rogpeppe> mgz: i still don't quite see where the conflicts come in
<mgz> it is not.
<rogpeppe> mgz: pardon my lack of vcs-fu
<mgz> hg handles the same situation by refusing to update to a non-descendant
<rogpeppe> mgz: or ancestor?
<fwereade> rogpeppe, dimitern: a thought about charm config changes on upgrade-charm
<dimitern> fwereade: yeah?
<fwereade> rogpeppe, dimitern: when a setting type changes, is it *really* any better for us to barf than to just ignore it?
<fwereade> rogpeppe, dimitern: if we ignore it we just get the default
<rogpeppe> fwereade: i think so
<dimitern> fwereade: sgtm
<rogpeppe> fwereade: it's an incentive for charm authors to keep things compatible by only adding compatible settings
<fwereade> rogpeppe, but *actual* compatibility is more than just the type
<rogpeppe> fwereade: sure. the new charm must declare all the old settings, right?
<fwereade> rogpeppe, nope, we just discard ones that are now irrelevant
<fwereade> rogpeppe, maybe *that* is crazy
<rogpeppe> fwereade: so you're proposing that there are no charm settings compatibility rules at all?
<fwereade> rogpeppe, but I don't *think* so
<rogpeppe> s/there are/there should be/
<fwereade> rogpeppe, the only one that was ever mooted was the type one, and I'm struggling to see what value it delivers
<fwereade> rogpeppe, delete is fine and add-new is fine
<rogpeppe> fwereade: i thought it was stronger than that. i think that the new charm should declare all the old settings and they should be of the same type.
<fwereade> rogpeppe, so if a newer version of a charm adds some smarts that render some option irrelevant, it has to carry them forever?
<rogpeppe> fwereade: yeah. that's compatibility for you.
<fwereade> rogpeppe, I don't think that's helpful
<rogpeppe> fwereade: i think that settings are a good indicator of compatibility. relations not nearly so much.
<fwereade> rogpeppe, this means that there's never any incentive to make a charm smarter -- once there's a config setting, the charm author either has to support it forever and allow it to override (complex) or just ignore a published config setting (evil)
<fwereade> rogpeppe, the relations *are* a hard constraint
<rogpeppe> fwereade: yes, but loads of very different charms declare the same relations, right?
<fwereade> rogpeppe, allowing a charm upgrade to break an existing relation is I think clearly not ok
<rogpeppe> fwereade: agreed. but my point is that's not a very strong constraint
<fwereade> rogpeppe, yeah, that's inevitable
<fwereade> rogpeppe, --switch has valid use cases, but they are tiny islands of sanity in a great ocean of borken
<rogpeppe> fwereade: i think that specifying settings compatibility makes the islands substantially larger
<fwereade> rogpeppe, it just feels like walling off one tiny area of the borkenness and saying "this is really really bad" should probably be justified a bit more strongly
<fwereade> rogpeppe, what is the benefit of not allowing config settings to be deleted?
<rogpeppe> fwereade: how about an explicit "deprecated" type?
<rogpeppe> fwereade: so you can upgrade to deprecated and deprecated settings can be deleted.
<fwereade> rogpeppe, but we still have to carry "deprecated" forever, and I don't see who's helped by this
<rogpeppe> fwereade: that gives charm authors a path to deleting settings.
<fwereade> rogpeppe, I don't see how it improves on just deleting the settings straight off
<rogpeppe> fwereade: it means we have much greater confidence that we're not upgrading to a totally unrelated charm
<fwereade> rogpeppe, ah, sorry, you're proposing that we validate the two charms' schemas against one another
<rogpeppe> fwereade: isn't that essentially what we're currently doing?
<rogpeppe> fwereade: except we only validate some relations
<fwereade> rogpeppe, no, we just validate the old settings against the new schema as forgivingly as possible, *apart* from the types
<fwereade> rogpeppe, for config
<fwereade> rogpeppe, and for relations, we validate only that the relations the charm is currently in can continue to exist
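The forgiving conversion fwereade describes (keep old settings the new schema still declares with the same type, fall back to the new default for removed or type-changed ones, instead of barfing) might look roughly like this. The `Option` type and `convertSettings` name are hypothetical stand-ins, not juju-core's actual types.

```go
package main

import "fmt"

// Option is a hypothetical stand-in for one entry in a charm's config schema.
type Option struct {
	Type    string      // "string", "int", "boolean", ...
	Default interface{} // value used when no compatible setting survives
}

// typeOK reports whether an old value matches the declared option type.
func typeOK(v interface{}, t string) bool {
	switch t {
	case "string":
		_, ok := v.(string)
		return ok
	case "int":
		_, ok := v.(int)
		return ok
	case "boolean":
		_, ok := v.(bool)
		return ok
	}
	return false
}

// convertSettings drops settings the new schema no longer declares and
// replaces type-mismatched values with the new default.
func convertSettings(old map[string]interface{}, schema map[string]Option) map[string]interface{} {
	out := make(map[string]interface{})
	for name, opt := range schema {
		if v, ok := old[name]; ok && typeOK(v, opt.Type) {
			out[name] = v // compatible old value survives the upgrade
		} else {
			out[name] = opt.Default // removed/retyped settings get defaults
		}
	}
	return out
}

func main() {
	schema := map[string]Option{
		"port":  {Type: "int", Default: 80},
		"title": {Type: "string", Default: "hello"},
	}
	old := map[string]interface{}{
		"port":     "8080", // type changed: new schema wants int
		"title":    "blog", // still valid: kept as-is
		"obsolete": true,   // no longer declared: silently dropped
	}
	fmt.Println(convertSettings(old, schema))
}
```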
<fwereade> rogpeppe, --switch is a great big gun that can blow your whole leg off ;)
<rogpeppe> fwereade: here's another much simpler validation scheme: just check that charm.Meta.Name() is the same
<rogpeppe> fwereade: and the relation check too
<fwereade> rogpeppe, previously considered and discarded, the name is just a heuristic
<rogpeppe> fwereade: aren't heuristics exactly what we're playing at here?
<fwereade> rogpeppe, I'm more concerned about --switch not allowing something that's sane than about it allowing something that's insane
<fwereade> rogpeppe, the only sanity I think we can legitimately care about is that which makes our own internal state invalid
<rogpeppe> fwereade: i think that people will play by the rules we write
<fwereade> rogpeppe, everything else is in the hands of the person who hopefully knows well enough what they're doing
<rogpeppe> fwereade: i think some sanity checks are really useful here. it's so easy to type a wrong url
<rogpeppe> fwereade: and the settings compatibility checks seem like a Good Thing to me
<rogpeppe> fwereade: it's not hard to comply with them
<rogpeppe> fwereade: compatibility rules that say you can change something you already support are quite usual
<teknico> hi, I'm debugging the juju-gui charm and trying to manually run the config-get command, where do I get the client id it needs?
<rogpeppe> fwereade: but i think it's perfectly reasonable to say that the charm name is the lynchpin of compatibility
<rogpeppe> fwereade: heuristic or not. (well, we make it non-heuristic by attaching some semantics to it, i think)
<fwereade> teknico, sorry, it's basically impossible... well, not *impossible*, but definitely impractical
<fwereade> teknico, will a CLI get not help?
<rogpeppe> mgz: ah, a non-descendant of the same root, i suppose
<teknico> fwereade, what's a CLI get?
<fwereade> rogpeppe, `juju get myservice [configkey]`
<rogpeppe> teknico: ^
<fwereade> er teknico ^^
<mgz> right, it basically refuses to deal with the three-way merge case (which is reasonable)
<mgz> though it will still screw you over if you have uncommitted changes and don't pass a flag to abort on that too
<teknico> fwereade, yes, it does, thank you
<fwereade> teknico, sweet
<fwereade> rogpeppe, I'm still not following how your proposed checks solve user problems
<fwereade> rogpeppe, I am taking as an idealistic axiom that charms should work correctly with all-default values, which I know is not strictly true, but I have no interest in making it any easier to deploy charms that are broken by default
<fwereade> rogpeppe, we don't have the ability to track a charm's provenance, and I'm not sure we want to disallow the possibility of one charm being a drop-in replacement for another
<rogpeppe> fwereade: does the install hook get called again when a charm is upgraded?
<fwereade> rogpeppe, no
<rogpeppe> fwereade: well then. charms should work correctly when *installed from scratch with all-default values*.
<rogpeppe> fwereade: but starting from the middle, probably not
<fwereade> rogpeppe, what is the value to users in disallowing a --switch to an arbitrary charm? ISTM that we simply *cannot* know for sure that the user is doing something crazy *except* when existing relations do not match
<fwereade> rogpeppe, I feel like --switch is an implicit "I know what I'm doing" to which the only acceptable failure is "that action is impossible" rather than "that action appears to be bloody stupid" ;)
<rogpeppe> fwereade: ISTM that with such a potentially dangerous feature, we should at least have some sanity checking
<fwereade> rogpeppe, on the other side, when we're upgrading from the same base url, I don't see any value to fossilizing the config
<rogpeppe> fwereade: yeah, maybe. i'd like to speak to a real user about this issue.
<Makyo> I'm getting a bunch of panics around mongo when running tests in trunk and out of ideas.  I'm on raring, think that might make a difference?
<rogpeppe> Makyo: paste?
<fwereade> m_3, rogpeppe and I would be interested to know (1) what level of safety you will require around upgrade-charm --switch and (2) your views on how liberal we should be wrt config conversion at upgrade-charm time (with or without --switch)
<rogpeppe> fwereade: the user problem i'm imagining is: juju upgrade-charm --switch wordpress # oops, i meant to type one of the dozen other charms that happened to have db and http relations declared.
<Makyo> rogpeppe, http://pastebin.ubuntu.com/5619482/
<rogpeppe> fwereade: perhaps we should do compatibility checks by default, but provide a --force flag to override them
<fwereade> rogpeppe, I think that if you're using --switch you have a bit of extra responsibility to switch to things that are sane
<fwereade> rogpeppe, hmm, --force-url?
<fwereade> rogpeppe, --force's pre-existence is irritating
<rogpeppe> fwereade: what's the name of the flag currently?
<fwereade> rogpeppe, there isn't one
<fwereade> rogpeppe, it's to support a use case originally reported by m_3
<fwereade> rogpeppe, it's --switch but nothing has been released with that flag
<rogpeppe> fwereade: oh, darn, i'd forgotten about the existing --force
<fwereade> rogpeppe, wrt config changes I am not confident, apart from anything else, that every existing charm is compatible with every version of itself
<rogpeppe> fwereade: it's highly unlikely to be.
<fwereade> rogpeppe, so additional checking breaks upgrades that worked before
<fwereade> rogpeppe, we could probably get away with that ofc
<rogpeppe> fwereade: but we can probably make the default (non-forced) behaviour reasonably safe
<fwereade> rogpeppe, what is unsafe about using default values for removed or incompatible settings?
<rogpeppe> fwereade: just that settings are the only decent indicator that we have of whether a charm really is the same
<fwereade> rogpeppe, charms *ought* to work with default settings, and if they don't I think I am comfortable calling it a charm bug
<fwereade> rogpeppe, if it's from the same base charm url we have a pretty good indicator already I think
<fwereade> rogpeppe, and --switch is a big-boy tool
<rogpeppe> fwereade: charms will probably work with default settings, but if you upgrade to a random charm, it will probably not work, because its install hook has never been called
<fwereade> rogpeppe, right, so be careful with --switch
<rogpeppe> fwereade: we seem to have switched our usual roles here :-)
<rogpeppe> fwereade: what's wrong with using the charm name again?
<fwereade> rogpeppe, I was pondering that, and I *think* I'm consistent: sync-tools is a *necessity* for a broad class of users, while --force-url (which I think is better than --switch) is a nice feature for a small group of particularly technical users
<fwereade> rogpeppe, what does it get us? in the normal case it's redundant, in the --force-url case it's an arbitrary safety guard covering part of one blade on an infernal machine ;)
<rogpeppe> fwereade: nice features like that have a tendency to be picked up and used by everyone :-)
<rogpeppe> fwereade: yeah - it's like a safety catch
<rogpeppe> fwereade: it's easy to disable, but useful nonetheless
<fwereade> rogpeppe, on one barrel of a terrifying scifi multicannon ;)
<fwereade> rogpeppe, if the nuke doesn't get you the CAM dusting will
<rogpeppe> fwereade: that bad?
<rogpeppe> fwereade: why do we have a charm name anyway?
<fwereade> rogpeppe, well, yeah, there is an arbitrarily large number of ways a charm can be irreparably broken despite all our checks here
<rogpeppe> fwereade: i see the mooted charm name check as a symbolic check more than anything
<rogpeppe> stupid python question: how do i easily find the source for a given python package?
<rogpeppe>  i'm about to do find / -name pythonpackage, and i'm sure there's a better way
<benji> rogpeppe: import foo; foo.__file__
<benji> (it will normally return a .pyc, just drop the "c" to get the source)
<ahasenack> or foo.__path__ if it's a module
<rogpeppe> benji: bingo, thanks
<benji> my pleasure
<rogpeppe> mgz, hazmat: i'm thinking that my proposed godeps -u is at least no more dangerous than go get -u, is it?
<rogpeppe> mgz, hazmat: leaving aside for the time being whether that's sufficiently safe or not
<mgz> go get -u is indeed a similar thing.
<rogpeppe> mgz: ha, trying your suggested scenario from before: http://paste.ubuntu.com/5619713/
 * rogpeppe has reached eod
<rogpeppe> g'night all
<mgz> rogpeppe: right, it's not a sane thing to do with that bzr model
<marcoceppi> So, does juju-core have the ability to upgrade the current environment in place?
<mramm> marcoceppi: it does
<mramm> marcoceppi: it currently is limited to minor version upgrades (1.10 to 1.11 for example)
<mramm> marcoceppi: but we will be adding major version upgrades in one of the very next minor version releases
<marcoceppi> mramm: at a high level, how does it work? is there a juju command for it - or is it just automagic
<mramm> there is a command
<mramm> so you get to control when the update happens
<marcoceppi> cool, good enough for me
<marcoceppi> thanks
<m_3> fwereade: sorry I missed the question about --switch earlier... conferencing atm
<fwereade> m_3, no worries
<fwereade> m_3, if you have leisure to read a spot of the conversation that went before, for context, I would be interested to know your thoughts
<m_3> I'm wondering if a specific charm version should be shown and then specified, but lemme read a bit more carefully
<fwereade> m_3, we have a separate --revision, which just updates to the specified revision of the charm a service is already using
<fwereade> m_3, we probably don't output "upgraded from %s to %s", and surely should
<m_3> fwereade: yes... I'd almost consider the revision part of the charm namespace
<fwereade> m_3, not sure what you mean by namespace -- but whenever we describe a charm we should certainly include the rev
<m_3> fwereade: but anyways... in answer to the orig question, I think that if the user is specifying both the service and the namespace of the new charm explicitly
<m_3> then a '--force' variant is not nec
<m_3> fwereade: namespace would mean cs:mysql:5 or local:mysql:37
<m_3> but not really important
<m_3> fwereade: should work to have `juju upgrade-charm --switch local:mysql my_database_service`
<fwereade> m_3, that's the use case I had in mind
<m_3> and then the equiv `juju upgrade-charm --switch --revision 5 cs:mysql my_database_service`
<m_3> the "cs:mysql" could be "mysql" of course
<fwereade> m_3, the idea is that charm urls are inferred just as they would be at deploy time
<m_3> and --revision optional
<m_3> ack
<fwereade> m_3, we currently have it that --switch and --revision are mutually exclusive
<fwereade> m_3, seemed simpler all round
<fwereade> m_3, if you're switching, specify the url properly :)
<m_3> but 1.) still service-specific, and 2.) the presence of '--switch' is enough checking imo
<fwereade> m_3, if you skip revision, you get latest, which is probably what you meant
<m_3> fwereade: hmmm.... as to mutually exclusive...
<fwereade> m_3, --switch foo-1 --revision 2
<m_3> oh, so I only am able to switch from local to the latest store rev?
<fwereade> m_3, no -- you specify the target charm as precisely as you care to in switch
<m_3> but how can I specify target charm revision?
<m_3> other than with '--revision'
<m_3> oh, just `juju upgrade-charm --revision 5 mysql my_database_service`
<fwereade> m_3, `--switch foo-1` will be expanded just as it would in deploy to cs:precise/foo-1
<fwereade> m_3, I have clearly not explained well at all
<fwereade> m_3, no args: latest with same base url
<m_3> sorry, I'm a bit confused as to how one switches _back_ to the store-based charm
<m_3> scenario... `juju deploy mysql` gives me cs revision 5 say
<m_3> find a bug... update local:mysql.. then switch with
<m_3> sorry... lemme fix the previous
<m_3> scenario... `juju deploy mysql mydb` gives me cs revision 5 say
<m_3> now
<m_3> `juju upgrade-charm --switch local:mysql mydb`
<m_3> the fix didn't work... so I revert
<m_3> but I'd like to revert to mysql '4' in the cs
<m_3> what would the syntax for this be?
<m_3> (if the answer is you can only revert back to the latest in the charm store, that's fine)
<m_3> but does that "switch" require a '--switch'?
<fwereade> m_3, `--switch mysql` for the latest; `--switch mysql-127` for a specific rev
<fwereade> m_3, similarly `--switch local:mysql` or `--switch local:mysql-17`
<fwereade> m_3, balls, thought of a bug, we might infer the wrong series
<fwereade> m_3, we should take it from the previous charm url, not from the env
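The expansion fwereade describes (`--switch foo-1` becoming `cs:precise/foo-1`, with the series taken from the service's previous charm URL rather than the environment) can be sketched as below; `inferCharmURL` is a hypothetical illustration, not juju-core's actual URL parsing.

```go
package main

import (
	"fmt"
	"strings"
)

// inferCharmURL expands a partial charm reference: a missing schema
// defaults to the charm store ("cs:"), and a missing series is filled in
// from the previous charm URL's series, as discussed above.
func inferCharmURL(arg, prevSeries string) string {
	if !strings.Contains(arg, ":") {
		arg = "cs:" + arg // default to the charm store schema
	}
	schema, rest, _ := strings.Cut(arg, ":")
	if !strings.Contains(rest, "/") {
		rest = prevSeries + "/" + rest // series from the previous charm, not the env
	}
	return schema + ":" + rest
}

func main() {
	fmt.Println(inferCharmURL("foo-1", "precise"))       // cs:precise/foo-1
	fmt.Println(inferCharmURL("local:mysql", "precise")) // local:precise/mysql
}
```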
<fwereade> dimitern, ^
<m_3> fwereade: ack... perfect
<m_3> fwereade: also not affecting 'myotherdb' service happily singing along on cs:mysql-5, right?
<fwereade> m_3, shouldn't do -- there is the usual hairiness around local repos but I think it's near-enough orthogonal
<m_3> fwereade: ack... thanks!  (and thanks roger)
<thumper> mramm: did you plan some 1:1 calls?
<mramm> yea
<mramm> I do
<mramm> I just found out an hour ago that I need to get all the blueprints in by tomorrow morning
<mramm> so I am madly working on that stuff right now
<mramm> but I should probably take a break and schedule the 1 on 1's or the tyranny of the urgent will get me ;)
<thumper> :)
<mramm> thumper: added one for you
<mramm> already talked to dave on sunday night, but going to touch base again if he's around
 * thumper nods
<davecheney> mramm: ping, you sent me a meeting invite for 45 mins ago
#juju-dev 2013-05-01
<mramm2> davecheney: sorry, I missed you at the meeting time, and then went to dinner
<davecheney> mramm2: wanna talk now ?
<mramm> davecheney: sure
<mramm> I don't have much of an agenda, just touching base
<mramm> since we talked sunday
<davecheney> mramm: yeah, i don't have anything to add
<mramm> cool
<mramm> then we can skip it
<davecheney> is wed 8:30 our regular time ?
<mramm> and I'll let you keep up the good work ;)
<mramm> well, not regular
<davecheney> cool
<mramm> I just scheduled it after my talk with Tim
<davecheney> i'm sorta behind today after LP was out all yesterday
<davecheney> no lp == no ppa == no bootstrap :(
<mramm> gotcha
<mramm> no worries
<davecheney> william has asked for one more load test run, with the presence code commented out
<mramm> I sent an e-mail this evening about the blueprint stuff (to the whole team)
<davecheney> given we have to give the load test resources back on the 3rd
<mramm> davecheney: yea
<davecheney> it's probably worthwhile fitting it in this week
<davecheney> re: email, read and understood
<mramm> davecheney: agreed
<davecheney> jolly good, as you were
<davecheney> mramm: one thing, re my email about tracking patches to backport
<davecheney> i don't think that page captures it totally
<davecheney> which means everyone is keeping their own incomplete list
<davecheney> which means we'll probably miss something for the next 1.10 point release
<davecheney> which is bad
<mramm> Yea
<mramm> I think we should talk about this at the meeting
<mramm> I'm not sure everything needs to be backported
<davecheney> mramm: that is fine
<mramm> 1.12 will probably be less than a month away
<mramm> so it isn't a huge deal
<mramm> but we should *know* what is being backported
<mramm> and *know* what is not
<davecheney> ^ exactly
<mramm> and have *good reasons* for which is which
<davecheney> mramm: maybe lp/bzr has good tools to do this automatically
<davecheney> i don't know
<davecheney> a google doc is a lofi solution
<mramm2> yea
<mramm2> there is a target series thing in lp
<mramm2> not that I know if it makes sense
<mramm2> anyway, I think I'm going to bug out for the evening.... I'll look into the problem and see if I can find a solution to recommend before the meeting.
<thumper> davecheney: I'm heading out to take the kids out for exercise.  I'm taking my laptop to work on blueprints/specs but won't necessarily be online
<thumper> will have mobile
<davecheney> thumper: no worries
<davecheney> understood
<davecheney> m_3: ping
<davecheney> i'm doing another load test run
<davecheney> fwereade: wanted to see what it looks like with the agent presence facility turned off
<hazmat> is there a way to forcibly remove a unit, i hit another case of a wedged unit in  bug 1175031
<_mup_> Bug #1175031: wedged unit state error but can't be resolved <juju-core:New> <https://launchpad.net/bugs/1175031>
<rogpeppe> mornin' all
<dimitern> guys, my network connection is rather flaky since yesterday evening - coming on and off frequently, and generally being unstable
<rogpeppe> dimitern: welcome to the club!
<dimitern> rogpeppe: thanks :| i hope it'll get better, but sometimes it's almost unusable
<rogpeppe> dimitern: i'm lucky that my mobile internet bandwidth is pretty good here, so i've got an alternative
<dimitern> so, with my internet troubles and considering it's a public holiday today here, i'll be off
<dimitern> see you all tomorrow
<mgz> later dimiter
<rogpeppe> fwereade: ping
<rogpeppe> that annoying feeling when you know you've implemented something in the past, but can't remember where it is, or even what you called it.
<rogpeppe> ha! found it
<mgz> jam: standup?
<jam> mgz: logging in now
<rogpeppe> mramm, fwereade: kanban?
<mramm> still in another meeting
<mramm> be there in a couple of min
<fwereade> rogpeppe, mramm: sorry, public holiday, I should have said
<rogpeppe> fwereade: np
<fwereade> rogpeppe, are you aware of a reason juju set doesn't use statecmd?
<rogpeppe> fwereade: possibly. let me check.
<rogpeppe> fwereade: possibly because it worked as well with methods on state.Service
<fwereade> rogpeppe, hmm, ok, I might have to do something about that -- Service.SetConfigYAML doesn't feel like a sane thing at all to me
<rogpeppe> fwereade: SetConfigYAML may well have preceded statecmd, i'm not sure
<rogpeppe> fwereade: hmm, no it didn't
<rogpeppe> i wish i knew how to interpret the revnos printed by bzr blame
<rogpeppe> fwereade: you didn't seem to think it too bad here: https://codereview.appspot.com/8626043/diff/5001/state/service.go#newcode739
<rogpeppe> fwereade: but i'd be ok seeing it move if you do mind the churn
<rogpeppe> s/do/don't/
<rogpeppe> fwereade: BTW you might be interested in this. it's a JSON representation of the API: http://paste.ubuntu.com/5622552/
<rogpeppe> fwereade: there are quite a few possible applications for the info
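A JSON description of an API like the one rogpeppe pasted can be generated mechanically with reflection over the API object's exported methods. This is only a sketch of the idea; the `Client` type and its methods are made up, not juju-core's actual RPC root.

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// Client is a made-up API object standing in for an RPC root type.
type Client struct{}

func (Client) ServiceGet(name string) (map[string]interface{}, error) { return nil, nil }
func (Client) ServiceSet(name string, opts map[string]string) error   { return nil }

// describe builds a JSON-able summary mapping each exported method name
// to the string forms of its argument types.
func describe(v interface{}) map[string][]string {
	t := reflect.TypeOf(v)
	out := make(map[string][]string)
	for i := 0; i < t.NumMethod(); i++ {
		m := t.Method(i)
		var args []string
		for j := 1; j < m.Type.NumIn(); j++ { // In(0) is the receiver
			args = append(args, m.Type.In(j).String())
		}
		out[m.Name] = args
	}
	return out
}

func main() {
	b, _ := json.MarshalIndent(describe(Client{}), "", "  ")
	fmt.Println(string(b))
}
```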
<rogpeppe> fwereade: i have to jump on my bike and go back from town
<rogpeppe> fwereade: might pop my head in later
<rogpeppe> fwereade: if not, see ya tomoz
<thumper> morning
<thumper> hi fwereade (if you are still around)
<fwereade> thumper, hey, more or less; sorry, long time no speak
<thumper> fwereade: that happens
<thumper> I was just thinking this morning that it had been a while since we had talked
<fwereade> thumper, how's it going?
<thumper> bit of a headache this morning, but not sure if that is lack of coffee or from being punched in the head
<thumper> :)
<thumper> it has been quite a slow week as I have had to deal with kids on school holidays
<thumper> i'm trying to get some meat into some of the blueprints for next week
<fwereade> thumper, ouch
<fwereade> thumper, I am much exercised by service config setting
<thumper> although it is the ribs that are hurting right now
<thumper> what do you mean by service config setting?
<fwereade> thumper, juju set
<thumper> right, what's the problem?
<fwereade> thumper, egregious incompatibility in --config flag
<fwereade> thumper, but on looking at it, ow, my brain
<fwereade> thumper, and, fair enough, the --config flag seems reasonably obvious
<fwereade> thumper, *but* it appears to have been implemented without actually checking what python did
<thumper> fwereade: I have some sekrit work to make jcastro happy... (I implemented switch)
<fwereade> thumper, <3
<thumper> fwereade: oh, and I guess that is a problem
<fwereade> thumper, yeah -- and there's a SetConfigYAML method on Service which I reviewed with by-far-not-enough whining :/
<fwereade> thumper, and is in hindsight profoundly wrong
<thumper> I've been half thinking recently about things like pubsubhubbub
<fwereade> thumper, but forgivable because the SetConfig implementation is *itself* profoundly icky, because it sets typed data as a map[string]string, and gaah
<thumper> heh
<fwereade> thumper, so with enough kicking that can be collapsed into a SetConfig(map[string]interface{}) that at least has the data of the right type, and validates it
<thumper> my juju mind has been somewhat split between: logging, ha, containerisation, status announcements
<fwereade> thumper, and with slight shame tacking the weird YAML behaviour (not at *all* obvious) onto charm.Config, and bleh
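The ickiness fwereade describes, settings arriving as map[string]string even though they carry typed data, forces every value to be parsed back into its declared type. A hedged sketch of that round-trip (the `coerce` helper is hypothetical); taking map[string]interface{} and validating types directly, as proposed above, avoids it entirely.

```go
package main

import (
	"fmt"
	"strconv"
)

// coerce parses a string-encoded setting back into its declared type,
// illustrating the extra work a map[string]string interface imposes.
func coerce(value, typ string) (interface{}, error) {
	switch typ {
	case "string":
		return value, nil
	case "int":
		return strconv.Atoi(value)
	case "boolean":
		return strconv.ParseBool(value)
	}
	return nil, fmt.Errorf("unknown option type %q", typ)
}

func main() {
	v, err := coerce("8080", "int")
	fmt.Println(v, err) // 8080 <nil>
	_, err = coerce("not a number", "int")
	fmt.Println(err != nil) // true: bad values only surface at parse time
}
```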
<fwereade> thumper, I haven't really managed to settle cleanly on a high-level topic at all, I have to spend tomorrow on blueprints
<thumper> fwereade: does it really need a "step back and reevaluate"?
<thumper> and take a fresh look at roles and responsibilities?
<fwereade> thumper, the config stuff?
<thumper> yeah
<fwereade> thumper, it was somehow implemented wrong and without tests, so I kinda needed to fine-tooth comb it to figure out what was going on
<thumper> ouch
<fwereade> thumper, it seems to be nonstandard in several ways
<fwereade> thumper, each individually forgivably so, but end-to-end it's very confusing
<thumper> it becomes a tough call
<fwereade> thumper, that was the stuff I was trying to dump on you when you first joined, all I had then was a slight intimation of it being weird and needing a look
<thumper> heh
<thumper> yay, dodged a bullet there
<fwereade> thumper, in this case I'm fairly comfortable choosing to fix rather than to layer on what feel like further hacks
<thumper> yeah, good call
<fwereade> m_3, ping
<fwereade> thumper, anyway, sorry, you were saying much more interesting things
<thumper> :)
<thumper> I had a good chat with the BDFL of docker
<fwereade> cool!
<thumper> he is CEO of dotcloud
<thumper> unfortunately he is on holiday for two weeks starting Monday
<thumper> but is going to make sure someone from their office goes to the gophers meetup
<thumper> docker are very keen to work with us
<thumper> to make sure we get what we need
<thumper> anyway, I've added quite a bit to the containerisation spec
<thumper> and I was going to start on HA spec today
<fwereade> awesomesauce
<thumper> however I have some ideas that don't fit into specs
<thumper> in particular rationalizing logging
<thumper> and having a way to publish status changes
<thumper> a la pubsubhubbub type thing
<thumper> just a publish/subscribe hub
<thumper> if we had the api server and other commands publish on the bus,
<thumper> we could potentially have something like 'juju observe'
<thumper> very hand wavey right now
<thumper> as I've spent exactly 5 minutes actively thinking about it
<thumper> rest of the time it has been quietly percolating in the hind-brain
<bigjools_> g'day
<hazmat> thumper, that sounds nice.. api server publishing messages, instead of state txn tail
<thumper> hazmat: it would be nice, but I've not put too much thought into it yet
<thumper> hazmat: we could then have a log listener that takes the events and logs them
<thumper> hazmat: but we could also hook in other listeners
<hazmat> thumper, yes.. aka audit trail
 * thumper nods
<thumper> I may also hack up a different logging implementation based on my C++ work in the past
 * thumper has written too many logging systems
<thumper> there would be differences, but I think we could make one that is idiomatic go without too much trouble
<hazmat> thumper, i was trying out a few web api servers broadcasting messages for a thought experiment; it works well
<thumper> I don't get what you are saying there :)
<thumper> what were you testing?
<hazmat> i ended up on rabbitmq.. after trying hard with zeromq..
<hazmat> thumper, seeing what a juju api server for internal api would look like
<thumper> ok
<hazmat> thumper, i ended up using the existing juju-core api for an embedded watch/observer in a new version of juju deployer.. the client side is useful already.. embedded in the server api is nice as stat/audit/rate limit.
<hazmat> thumper, not sure what you mean by logging impl
<thumper> hazmat: have logging categories
<thumper> and be able to set verbosity on a category by category level
<thumper> with sub categories
<hazmat> log hierarchy
<thumper> so set juju.api to INFO, juju.uniter.upgrade DEBUG etc
<thumper> aye
<thumper> hazmat: I had a good one I wrote for unity
<thumper> hazmat: that is lovely in C++ but won't work in go
<thumper> as no macros, and no object lifetime
<hazmat> i gotta imagine there's something extant already
<thumper> so slight changes needed
<thumper> hazmat: you'd think that...
<thumper> but most of the existing ones are horrible
<thumper> log4cpp was a bad port of log4j
<thumper> and boost logging was fugly
<hazmat> thumper, then again i would have thought the same about flocks
<hazmat> ;-)
<thumper> :)
<AeroNotix> hi guys
<thumper> s'up?
 * thumper waves at davecheney
<AeroNotix> https://github.com/AeroNotix/hpcloud so I've been working on these
<AeroNotix> I'm not super familiar with the launchpad site so I can't see how complete goose is
<AeroNotix> or what features it provides
<AeroNotix> but it seems that we have a very similar design
<AeroNotix> i.e. you have a central Client type which you then embed in submodules
<AeroNotix> from what I gather, at least
<AeroNotix> I am wondering if you would be open to me trying to merge my code base with yours
<AeroNotix> providing you want the features it has
<AeroNotix> I assume you're looking to bind to a lowest common set of available endpoints which most openstack providers... provide
<thumper> AeroNotix: a good person to poke about goose is wallyworld_
<thumper> I don't know anything about it
<AeroNotix> paging wallyworld_
<wallyworld_> hello
<AeroNotix> If you could read the scroll back it'd save me a lot of typing :P
 * wallyworld_ reads
#juju-dev 2013-05-02
<wallyworld_> AeroNotix: without looking closely at your code, it seems you've written an openstack client similar to goose
<AeroNotix> wallyworld_: indeed, that's what I thought at a cursory look through goose
<wallyworld_> you think you have features we don't support yet in goose?
<wallyworld_> we wrote goose generically but with juju in mind
<AeroNotix> I don't know, I can't make heads or tails of that launchpad site to tell the truth
<wallyworld_> so it does what we need for juju but is usable as a separate lib
<wallyworld_> i feel the same about github :-)
<AeroNotix> :)
<wallyworld_> how can i help with launchpad
<AeroNotix> And I'm not certain what are "base" openstack modules
<AeroNotix> I'm assuming CDN/Block/Compute are base
<AeroNotix> object store
<wallyworld_> at the moment, compute, object store and limited image (glance) support, as well as identity (userpass, key pair)
<wallyworld_> if you want the code, easiest to use bzr
<AeroNotix> yeah I will grab the code now
<wallyworld_> bzr branch lp:goose
<wallyworld_> if you need help etc, just ping me
<AeroNotix> will do
<AeroNotix> So Goose is for your needs, are you open to it being extended beyond those and becoming a more comprehensive library?
<thumper> wallyworld_: I'm surprised you didn't say "go get launchpad.net/goose" :-)
<thumper> AeroNotix: I would assume yes, as long as it continues to meet our needs
<wallyworld_> thumper: what can i say, i like bzr
<AeroNotix> ok sounds good
<thumper> wallyworld_: me too :)
<wallyworld_> AeroNotix: go for it :-)
<thumper> hmm lunchtime
<wallyworld_> AeroNotix: pun intended :-)
<AeroNotix> :)
<AeroNotix> ok it's quite late here but I'll check in tomorrow :)
<AeroNotix> night all
 * thumper heads to lunch
<Makyo> join #juju-gui
<Makyo> Yikes, too many windows at once.
<wallyworld_> jam: hi, i have to go to my son's school concert (oh joy) so will miss the weekly meeting but i should be back in time for the standup
<davecheney> Quoting fail
<davecheney> workitems_text: Invalid work item format: &quot;[TODO] Get Go 1.1 into Saucy.&quot;
<davecheney> OK
<rogpeppe> mramm, fwereade: keeps on chucking me out
<rogpeppe> i'll try one more time
<fwereade> blast, forgot appointment, bbiab
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: dimitern | Bugs: 6 Critical, 66 High - https://bugs.launchpad.net/juju-core/
<dimitern> fwereade: when you're here, i have a question
<dimitern> mgz: ping
<mgz> dimitern: hey
<dimitern> mgz: hey, it seems your branch reverting the raring workaround hasn't landed yet
<dimitern> mgz: does it mean that's still in the release?
<mgz> yeah, was holding off on it till after release deliberately
<dimitern> mgz: ah, ok
<mgz> I didn't want to patch it back in late, felt more risky than leaving it till we do 1.10.1
<mgz> and the backports thing meant what went in on day 0 was less crucial
<dimitern> mgz: ok, so we do have a workaround for raring in 1.10.0 then
<mgz> and now the release has happened, that's not going to be hit
<mgz> (unless you use a pre-release raring image, rather than the actual release)
<dimitern> mgz: what do you mean?
<mgz> there's no upstart update pending when you start a 13.04 machine
<dimitern> mgz: ah, they backed it out then
<mgz> no, it's just that there's no *update*, it's in the image
<dimitern> mgz: sorry, i still don't get it - the upstart issue we had, due to which the workaround was introduced - is it fixed or not?
<mgz> not yet, but it's only triggered when apt-get upgrade will install a new upstart
<mgz> this isn't the case for the release images, they have the latest upstart
<dimitern> mgz: ah, i see, we're good then
<mgz> and before a new upstart release is SRUed, this bug will get fixed
<jam> mgz: standup?
<mgz> jam: ta
<dimitern> fwereade: ping
 * dimitern bbi3m
<fwereade> dimitern, pong
<dimitern> fwereade: hey, can you take a look at this, to see if i'm heading in the right direction? http://paste.ubuntu.com/5625717/
<fwereade> dimitern, sure
<fwereade> dimitern, looks sane -- but you'll also, I think, want to be passing the charm in so you can extract the local endpoint of each relation and check that the new charm implements it
<fwereade> dimitern, no extra asserts necessary there though, just an error return
<fwereade> dimitern, from the ops POV it looks perfect as it is
<dimitern> fwereade: well, that's a separate card, wasn't sure if i should mix them in the same branch?
<dimitern> fwereade: but i guess the extra code for the endpoints checking is not much
<fwereade> dimitern, I think they're the same task -- checking the relations are the same delivers no value without checking they're sane, while checking their sanity almost demands that we also assert sameness
<dimitern> fwereade: ok then
<fwereade> dimitern, but, hmm
<dimitern> fwereade: should i exclude (new/all) peer relations from the generate ops?
<fwereade> dimitern, exclude new ones -- just work against original ones
<dimitern> fwereade: so as is it's ok - since the new ones won't be there until after setcharm succeeds
<fwereade> dimitern, yeah, but there's something knocking at my brain
<fwereade> dimitern, oh yeah! you want to check len(relations) against doc.RelationCount and return errRefresh or something (and handle that at the top level)
<dimitern> fwereade: ah! yeah, good point
<dimitern> fwereade: will do
<fwereade> dimitern, so I'll leave it to your judgment re 2 CLs or 1
<dimitern> fwereade: ok, thanks
<fwereade> dimitern, you might still have trouble testing that bit in isolation though
<dimitern> fwereade: i already have one failing test (only one) - the one testing new peer relations.. so i'm still figuring out how to test this
<fwereade> dimitern, yeah, bears some thinking about
<dimitern> fwereade: btw, can you print out the itinerary + sprint info in 2 copies for saturday?
<fwereade> dimitern, sure, np
<dimitern> fwereade: cheers
<rogpeppe> fwereade: FWIW this was the document i originally wrote regarding upgrades. It has a footnote on major-version upgrades, but not much. http://paste.ubuntu.com/5625794/
<rogpeppe> fwereade: i think we'd want some agent to be responsible for waiting for all machines to indicate they're ready, and perform the actual process
<fwereade> rogpeppe, yeah, but we also need to make sure that no units or machines are created in the meantime
<rogpeppe> fwereade: that's easy if we know that all agents have halted
<rogpeppe> fwereade: which is what i meant by "all machines to indicate they're ready"
<rogpeppe> fwereade: then we have a single lonely agent carrying out the appointed upgrade tasks
<fwereade> rogpeppe, ok, I think it's actually a little harder than "easy" but I agree it's not the hardest thing we'll ever have to do
<rogpeppe> fwereade: i meant that, given that step that we've already agreed, making sure that no units or machines are created in the meantime falls out naturally.
<rogpeppe> fwereade: that does of course assume the clients communicate through the API
<fwereade> rogpeppe, yeah, it's a bit hard to separate the two issues
<rogpeppe> fwereade: BTW one thing i don't see on the blueprints is the ability to dynamically change the agents running on an instance
<fwereade> rogpeppe, jobs on a machine?
<rogpeppe> fwereade: yeah
<fwereade> rogpeppe, that's something that feels like a bit of a can of worms that we don't strictly *need* at this stage, but I'll bear it in mind
<rogpeppe> fwereade: yes, i'm not sure about it, but it's something worth considering.
<rogpeppe> fwereade: the other significant thing with regard to major-version upgrades that i'm considering is how to do the actual mongo schema migration
<rogpeppe> fwereade: i'm wondering about building special upgrade binaries that know how to transition from one major version schema to another
<rogpeppe> fwereade: then the agent that's responsible for upgrading find the appropriate binary for the upgrade and runs it
<rogpeppe> fwereade: possibly running several in succession if upgrading across several major versions
<fwereade> rogpeppe, not sure how that's any better than just making the first agent of the new version responsible for running whatever series of upgrade methods is appropriate and just available right there in state
<rogpeppe> fwereade: that assumes the agent knows how to upgrade from every version. i *think* it might be nicer to isolate the compatibility code from the main code
<fwereade> rogpeppe, sounds like an awful lot of binaries to download and run, especially since the state-upgrade code is kinda going to have to be in state anyway, isn't it?
<rogpeppe> fwereade: i wasn't imagining it was
<rogpeppe> fwereade: i'd thought we'd have some specialised code which knows about the schemas for both versions and can run some mongo bulk change stuff.
<rogpeppe> fwereade: if the code is in state, then we're going to have loads of alternative versions of the same data structures in the state package indefinitely.
<rogpeppe> fwereade: but i take your point about multiple binaries
<rogpeppe> fwereade: perhaps put everything in state/upgrade
<rogpeppe> fwereade: although it's possibly about more than just mongo schemas, though i can't think of any good counter examples currently
<fwereade> rogpeppe, I think that API versioning may be waiting to confuse us here
<fwereade> rogpeppe, we'll see
<rogpeppe> fwereade: is there a particular problem scenario you have in mind there?
<fwereade> rogpeppe, nothing specific -- just that managing two versions at the same time makes me a little confused
<rogpeppe> ha, new internet fix time delayed by another two days
<Makyo> Rebuilt my dev environment this morning, but I'm still getting panics around mongo when testing in trunk: http://pastebin.ubuntu.com/5626071/
<dimitern> rogpeppe: they're just messing with you now :)
<fwereade> Makyo, this remains somewhat baffling -- you can start and use an environment with `default-series: raring`, right?
<rogpeppe> Makyo: are you using the mongo from tarball or from the PPA?
<Makyo> fwereade, will try that next.  rogpeppe, 2.2.4 from a PPA, looks like.
<rogpeppe> Makyo: i recommend trying the tarball version and seeing if that makes a difference
<Makyo> rogpeppe, alright, fetching that.
<Makyo> fwereade, bootstrap succeeded, status shows agent-state: down  agent-state-info (started)  series: raring.  Will try again in a few.
<Makyo> Oh, though I haven't installed from this new env.  Let me do that again.
<fwereade> Makyo, yeah, the presence doesn't seem to be 100% reliable in the first few seconds, I think it settles down solidly after that
<fwereade> Makyo, regardless, that's looking very much like a working mongo
<fwereade> Makyo, can you check whether you have the same version at home?
<dimitern> rogpeppe: reviewed both godeps CLs
<rogpeppe> dimitern: thanks!
<rogpeppe> dimitern: responded
<dimitern> rogpeppe: my reasoning about the const usage is that you probably don't need the [1:] anyway - it'll print out a NL, which is not bad
<Makyo> fwereade, 2.2.4 on both bootstrap node and home.  I can try the tarball though
<rogpeppe> dimitern: i don't want an extra newline before the Usage line. call me anal if you like :-)
<dimitern> rogpeppe: ok :) fair enough
<rogpeppe> dimitern: and the difference between var and const is minimal here really
<dimitern> rogpeppe: LGTM then
<fwereade> Makyo, just for confirmation, can you try to run the tests on the machine you just started? I imagine it'll demonstrate the problem but it would be good to check
<rogpeppe> dimitern: cool. i'll add some notes to the usage info about the output format
<dimitern> rogpeppe: yeah, that'll be helpful, thanks
<Makyo> fwereade, Okay, will report back.
<fwereade> dimitern, hey, I think you fixed this? https://bugs.launchpad.net/juju-core/+bug/1122134
<_mup_> Bug #1122134: status must report machine provisioning errors <juju-core:New> <https://launchpad.net/bugs/1122134>
<dimitern> fwereade: yeah, I did, I'll mark it appropriately
<fwereade> dimitern, cool, thanks
<dimitern> fwereade: i had to remove a uniter test case, because it violated the compatibility checks
<dimitern> fwereade: renaming a relation in wp charm from "db" to "db2" and trying to upgrade
<fwereade> dimitern, hmm, are you sure it wasn't the only thing covering some other case as well?
<dimitern> fwereade: well, i'll propose in a bit, so you can see
<fwereade> dimitern, cool, thanks
<dimitern> https://codereview.appspot.com/9084045
<dimitern> fwereade, rogpeppe: ^^
 * dimitern bbiab
<Makyo> fwereade, Fewer failures, but still some in state.  http://pastebin.ubuntu.com/5626261/
<fwereade> Makyo, huh, very strange
<fwereade> dimitern, reviewed, should be quite a simple change but the tests are a little more involved
<dimitern> fwereade: thanks!
<fwereade> dimitern, it's just rel.Endpoint(serviceName) for each relation that needs to be checked
<dimitern> fwereade: and serviceName is s.doc.Name?
<fwereade> dimitern, yeah
<dimitern> fwereade: ok, will change
<dimitern> fwereade: how about the tests?
<dimitern> fwereade: do they look good, except that endpoints change?
<fwereade> dimitern, well, I'd love to see a mechanism whereby we could pause txn execution just before it happens, and hook in to fuck up the state and see how it reacts
<fwereade> dimitern, but that's a bit out of scope here
<dimitern> fwereade: yeah, just "a bit" :)
<fwereade> dimitern, ;p
 * fwereade wants to dash off and work on another new thing now
<dimitern> fwereade: i was thinking of adding more tests with changing relations, but as you said we cannot change state during a transaction like that
<fwereade> dimitern, yeah, I don't think it's practical to test that behaviour via spray-and-pray
<dimitern> fwereade: i spotted a follow-up though - adding more tests to upgrade-charm to handle incompatible upgrades
<fwereade> dimitern, +100
<dimitern> fwereade: will add a card then
<fwereade> dimitern, cheers
<dimitern> fwereade: how can I get a fresh s inside a service method without Refresh() ?
<dimitern> fwereade: s = s.st.Service(s.doc.Name) ?
<dimitern> fwereade: changing the method receiver like that seems wrong..
<mgz> dimitern: fun branch for you to review if the upload ever finishes...
<dimitern> mgz: sure
<mgz> it's only like I'm touching every source file, what's the issue rietveld?
<mgz> dimitern: up, 9104045
<fwereade> dimitern, yeah, that'd be fine
<mgz> fwereade: ^you may also want to eyeball
<dimitern> mgz: will look shortly
<fwereade> mgz, cheers
<dimitern> fwereade: Assert: append(txn.DocExists, sameRelCount...), doesn't seem to work (txn.DocExists is an "ideal string" and sameRelCount := D{{"relationcount", s.doc.RelationCount}})
<fwereade> dimitern, hmm, perhaps DocExists is implicit in any other field check?
<dimitern> fwereade: so skip it?
<dimitern> fwereade: yeah, it seems to work
<fwereade> dimitern, cool
<rogpeppe> fwereade: here's a sketch of how we might do major version upgrades: http://paste.ubuntu.com/5626471/
<rogpeppe> fwereade: there are actually some interesting interactions between upgrades and multitenancy that we need to discuss
<dimitern> mgz: wow, that has to be the biggest diff ever! :)
<dimitern> mgz: so how did we manage to go through packaging for the release without copyrights?
<dimitern> fwereade: updated https://codereview.appspot.com/9084045
<dimitern> rogpeppe: you too perhaps wanna look? ^^
<rogpeppe> dimitern: looking
<dimitern> mgz: interesting, how is this mojo bzr update-copyright actually acting so smart?
<dimitern> a fine example why rietveld sucks for not having full diff preview :)
<mgz> dimitern: it's pretty funny, isn't it :)
<dimitern> mgz: LGTM
<dimitern> mgz: when are you planning on landing this?
<mgz> nowish, though might wait till next week just so people can bikeshed it a bit
<rogpeppe> dimitern: replied
<dimitern> rogpeppe: thanks!
<dimitern> mgz: haven't heard of parkinson's law of triviality before :) lmao
<dimitern> mgz: it would be a blast watching the yak shaving about it
<dimitern> fwereade: ping
* ChanServ changed the topic of #juju-dev to: https://juju.ubuntu.com | On-call reviewer: - | Bugs: 6 Critical, 66 High - https://bugs.launchpad.net/juju-core/
<rogpeppe> dimitern, fwereade: i'd be interested in your reaction to this straw man sketch on multitenancy: http://paste.ubuntu.com/5626707/
<rogpeppe> hazmat: ^
<hazmat> rogpeppe, that's a strange definition of multi-tenancy
<rogpeppe> hazmat: perhaps so
<rogpeppe> hazmat: any definition i tried to make seemed to be pushing me in that direction though
<rogpeppe> hazmat: this was the definition i started with:
<rogpeppe> We want to allow several state instances to be served from the same machine or,
<rogpeppe> in a high-availability context, to be able to have an arbitrary n-m mapping
<rogpeppe> from server processes to machines that the server processes run on,
<rogpeppe> with restrictions as deemed appropriate.
<rogpeppe> hazmat: this is my rough draft text for the blueprint: http://paste.ubuntu.com/5626724/
<hazmat> rogpeppe, the common definition is a single api endpoint that knows how to isolate resources for clients. a separate endpoint per client might be viable, but it scales poorly
<hazmat> rogpeppe, the goal is to amortize the cost of a set of ha state servers over many tenants
<rogpeppe> hazmat: there are two state servers we're talking about here
<rogpeppe> hazmat: one is mongod
<rogpeppe> hazmat: the other is the juju API state server
<rogpeppe> hazmat: and then there's the matter of the environment management agents too
<rogpeppe> hazmat: the solution i'm thinking about does amortise the cost of the mongod servers
<hazmat> rogpeppe, yes.. and we want a set of ha for all.. not a per process per tenant.
<hazmat> imo
<rogpeppe> hazmat: i don't know that that's too bad
<rogpeppe> hazmat: it's much much better than one instance per client :-)
<rogpeppe> hazmat: and means environments are naturally isolated
<rogpeppe> hazmat: i'm sure we could easily have 500 or so API servers per instance
<dimitern> rogpeppe: looks solid, if a bit overcomplicated in places
<hazmat> rogpeppe, so they each get a different port?
<rogpeppe> hazmat: i guess so
<hazmat> i'm skeptical, but it would be nicer to talk about in person.
<hazmat> scaling O(n) tenants is going to be a problem for some use cases
<hazmat> like cloud ;-)
<ahasenack> hm, the way the development in juju-core goes, bugs stay open even after they are merged
<ahasenack> so I don't get a notification that a bug has been fixed already (the branch was merged),
<ahasenack> because the bug status doesn't change
<ahasenack> case in point: #1172895 which was blocking me
<_mup_> Bug #1172895: relation-list incompatibility with pyjuju: -r <juju-core:Fix Committed by fwereade> <https://launchpad.net/bugs/1172895>
<ahasenack> I marked it as "fix committed" just now, because I saw that the branch was merged
<ahasenack> note that the review request is still up in LP, because you guys use something else
<ahasenack> if you are using something else for reviews, why not bite the bullet and use that same something else for bugs? If you link lp bugs to it (if supported), then the bug status will be good
<thumper> morning
<bigjools> morning
<thumper> hi bigjools
<bigjools> wazzup big t
<thumper> bigjools: not a lot
<thumper> bigjools: doing some side hacking... in go
<thumper> bigjools: a more useful logging package
<bigjools> \o/
<thumper> just writing some more tests, then I'll push to LP
<thumper> calling it ...
<bigjools> we spent some time looking at errors recently
<thumper> wait for it
<thumper> golog
<bigjools> and how we could improve them
<bigjools> your imagination is astounding :)
<thumper> i know, right?
<thumper> has the standard ideas
<thumper> modules, variable levels, writers and formatters
<bigjools> loggo would have been amusing
<thumper> bigjools: there is still time...
<thumper> I think I may well change it to that
<thumper> I was going for consistency
<thumper> but amusing is good
<thumper> and it still fits
<bigjools> :)
<thumper> bigjools: renamed
<bigjools> \o/
<thumper> bigjools: to be honest, I think jam suggested the same thing last night :)
<bigjools> haha
<bigjools> it sounds antipodean
#juju-dev 2013-05-03
<thumper> davecheney: ping
<thumper> unping
 * thumper wanders off for a bit
<rogpeppe> wallyworld: ping
<wallyworld> rogpeppe: hello
<rogpeppe> wallyworld: i was just looking at the simplestreams stuff
<rogpeppe> wallyworld: and realised that i didn't have the first idea about what it's about
<wallyworld> me either :-)
<rogpeppe> wallyworld: why "streams" ?
<wallyworld> rogpeppe: i have no idea how the naming was done, i just inherited the data format. there's a lp project at lp:simplestreams
<wallyworld> this stuff was invented by scott moser
<wallyworld> i just provided the go implementation to use it
<rogpeppe> wallyworld: yeah, i've been looking at that and trying to work out the reasoning behind the abstractions
<wallyworld> since that's the format the image metadata is being done in moving forward
<wallyworld> there was a big email thread on some list somewhere but i haven't seen it - not sure how long ago it was
<rogpeppe> wallyworld: is there a good overview of simplestreams somewhere?
<wallyworld> not that *I* know - I just looked at some readmes in the project
<wallyworld> i got the lp:simplestreams source code and poked around a bit
<wallyworld> when i started this stuff, i hadn't even seen the project, so had to reverse engineer everything :-(
<rogpeppe> wallyworld: what did you use as a reference for the data format?
<rogpeppe> wallyworld: just the simplestreams source code?
<wallyworld> the sample files :-(
<rogpeppe> oh lawks
<wallyworld> and then i read the source code
<wallyworld> and the readmes and some gaps were filled in
<wallyworld> and also, i was told via email a few extra tidbits
<wallyworld> rogpeppe: not the best way to implement something is it :-(
<rogpeppe> seems weird.
<wallyworld> i'm happy though it works for ec2 and openstack (canonistack) data out of the box
<rogpeppe> wallyworld: and as seems to be usual for something with "simple" in the name, it seems anything but :-)
<wallyworld> lol
<wallyworld> rogpeppe: some of the alias stuff was forced on scott by someone else
<wallyworld> that adds to the complexity
<wallyworld> rogpeppe: from mgz: https://lists.canonical.com/mailman/private/cloud/2013-March/004476.html
 * wallyworld goes to read the spec of the thing he has just implemented
<rogpeppe> wallyworld: i can't seem to authenticate to that archive
<wallyworld> hmmm. you might need to be subscribed already to that mailing list
<wallyworld> you can manage your subscriptions via launchpad
 * wallyworld has to go to soccer
<mgz> I was trying to not post links people can't read to public channel, but it doesn't matter much :)
<mgz> it's not a launchpad list, you need to sign up via the web interface there
<rogpeppe> i guess i'll have to wait for someone to react to the subscription request
<rogpeppe> wallyworld: i see stuff in the go package that doesn't seem to be mentioned in the simplestreams project. e.g. i can't find any occurrence of the string "crsn" (for the RegionAlias field). where's that stuff defined?
<wallyworld> rogpeppe: i don't think crsn is defined as a supported attribute - it is merely a key into the alias map
<wallyworld> it could be called foobar
<wallyworld> but for our purposes, we can assume it's there
<rogpeppe> wallyworld: and the code would still work if we renamed it to foobar?
<wallyworld> no. but it's easier than trying to deal with totally generic interface{} types in Go
<wallyworld> rogpeppe: the EC2 metadata is the only thing that uses that attribute afaik
<wallyworld> and it will always be crsn (customer region short name)
<rogpeppe> wallyworld: so where's it defined? what code has populated that attribute? it seems like it's part of the data format we're parsing.
<wallyworld> rogpeppe: there are tools in lp:simplestreams (shell scripts and python modules) to generate the data i believe
<wallyworld> rogpeppe: the only reason crsn is there (and the alias stuff) is that gustavo didn't like the verbosity
<wallyworld> so endpoint and region were pulled out of the ec2 image records
<rogpeppe> wallyworld: i think i have a slight difficulty with simplestreams as a name for the go package, which sounds very generic, but actually we're interested in parsing something that was produced by simplestreams but is actually more (and less) than that
<wallyworld> i have no attachment to the name
<rogpeppe> wallyworld: cool
<wallyworld> i just wanted something in its own package that could read the image data we need to consume
<rogpeppe> wallyworld: it's really "imagemetadata" or something not too far from that, i think
<wallyworld> sounds good to me
<rogpeppe> wallyworld: maybe just "images" :-
<rogpeppe> )
<wallyworld> that works also :-)
 * wallyworld really has to run away now
<rogpeppe> wallyworld: have fun!
<wallyworld> will do. thanks for looking at the crap code i wrote
<dimitern> fwereade: https://codereview.appspot.com/9084045/
<fwereade> dimitern, cheers
<fwereade> dimitern, did you have a change of heart re errRefresh? I thought we decided last night that we could actually eliminate it
<fwereade> dimitern, by ignoring the initial value of the service's relation count
<fwereade> dimitern, and just asserting that by the time the txn runs, it will have as many relations as we observed it to have when building the txn
<dimitern> fwereade: I did remove s.Refresh(), but kept the error to signal the need to get a fresh service from state in the loop, where it should be i think
<fwereade> dimitern, I don't actually think we need to refresh the service at all, because we don't need to look at RelationCount and we don't need a fresh service to get the current .Relations()
<dimitern> fwereade: well, you're basically agreeing with rogpeppe about not using s.doc.RelationCount then?
<fwereade> dimitern, I thought that not needing a fresh relationcount was the major epiphany last night, but it is possible that it was all in my head
<fwereade> dimitern, we need to assert it, but we don't need to validate it when building the txn
<rogpeppe> fwereade: +1
<rogpeppe> fwereade: that's what i was trying to get at
<fwereade> rogpeppe, dimitern: yeah, this stuff is tricky to communicate
<dimitern> ok, i'll remove the refresh and keep the assert(RC, eq, len(relations))
<rogpeppe> dimitern: sounds good
<fwereade> dimitern, that's perfect
<fwereade> dimitern, LGTM with comments/suggestions in the CL then :)
<dimitern> fwereade: cheers
<jamespage> hey - can anyone confirm whether bug 1175958 is also a problem from juju-core?
<_mup_> Bug #1175958: New peer relations not created when upgrading charms <amd64> <apport-bug> <raring> <juju:New> <juju (Ubuntu):New> <https://launchpad.net/bugs/1175958>
<jamespage> I've not got my maas/juju-core env quite right yet so can't confirm quickly
<dimitern> jamespage: i fixed this in juju-core
<dimitern> jamespage: it's in the 1.10 release
<jamespage> dimitern, +1 - great
<mgz> no one has been poking me for reviews today...
<dimitern> mgz: it's a slow day it seems
<mgz> everyone's packing :)
<mgz> packing BYTES
<mgz> there are some existing reviews to go over, and my own bits to land
<dimitern> yeah, i'll start packing soon as well
<rogpeppe> mgz: it's not mine, but i've been reviewing this and i'd like to see your views on it: https://codereview.appspot.com/9138044/
<mgz> yeah, need to go over that one
<rogpeppe> mgz: there are some issues with the parsing, but i'm concerned whether it's exposing the right level of detail. i'm wondering if a higher level interface might work better.
<rogpeppe> gah!
<mgz> nogah
<mgz> it will all be okay rog
<rogpeppe> mgz: apparently the cable has "mostly" been fixed
<mgz> ehehe
<rogpeppe> *sigh*
<rogpeppe> fwereade, dimitern, mramm: i think i'm going to give up
<rogpeppe> fwereade, mramm: please let me know if there is anything i should know about
<dimitern> rogpeppe: bad luck
<fwereade> rogpeppe, I think the gist is "see you soon"
<rogpeppe> fwereade: \o/
<fwereade> rogpeppe, is there a pithy name for the "1-buffered chan for mutexed field access" pattern?
 * rogpeppe thinks
<rogpeppe> fwereade: not sure. "chan as resource holder" is about as good as i've got
<rogpeppe> fwereade: what context will the name be used in?
<fwereade> rogpeppe, I'm just about to propose it
<fwereade> rogpeppe, dimitern: opinions on https://codereview.appspot.com/9175043 much appreciated
<dimitern> fwereade:  will look shortly
<rogpeppe> fwereade: will have a look in a bit. i have to take my lunch break and eat food and acquire heavy sacks of compost.
<fwereade> dimitern, rogpeppe: np
<fwereade> mgz, if you're on reviews, https://codereview.appspot.com/9175043 might be interesting
<mgz> ta
<dimitern> fwereade: reviewed
<fwereade> dimitern, cheers
<hazmat> fwereade,  what's the status of --config support for deploy?
<fwereade> hazmat, it's not complete but I don't anticipate problems finishing it off
<fwereade> hazmat, but it was a bit messier than I hoped and it's taken a little while to massage into shape
<hazmat> fwereade, cool.. its the last issue i think before people can start using core for their deployer stacks (ostack, etc)
<hazmat> i'm holding off an announce of the new core compatible deployer till it lands
<fwereade> hazmat, awesome -- I probably won't propose it today, but hassle me in person if I haven't done so on sunday ;p
<hazmat> fwereade, thanks will do, and safe travels
<fwereade> hazmat, cheers, and you
<fwereade> hazmat, btw, re wedged unit status, can it be resolved by bouncing the unit agent?
<hazmat> fwereade, i've had multiple wedged causes..
<fwereade> hazmat, ah sorry, I meant the resolved/hook-error one
<hazmat> fwereade, i'll check next time i run into it.. it's the deployer running that helps me trigger it, but that's in a holding pattern and basically done.  the other one looked like more of a state issue: the status reported missing provider state for the machine.
<hazmat> fwereade, i suspect that indeed a restart should fix the resolved wedge
<fwereade> hazmat, the other one is interesting too, I made a note on the bug
<fwereade> hazmat, finding out what actually happened to the instance is the issue there -- it *looks* like an environ issue not a state issue to me
<hazmat> fwereade, yes.. but i think its juju modifying/terminating the instance. that one needs more analysis.
<hazmat> fwereade, as far as getting forensic evidence for this stuff, is there a good set of known things to capture?
<hazmat> fwereade,  i was figuring i can grab the unified log and a mongodb dump which should cover everything afaics
<fwereade> hazmat, that should cover it, I think
<rogpeppe> fwereade: did you see my text around major-version upgrades yesterday?
<fwereade> rogpeppe, I saw it go by and totally failed to do anything with it
<fwereade> rogpeppe, sorry :(
<rogpeppe> fwereade: that's fine. i'd appreciate your thoughts at some point though.
<hazmat>  rogpeppe where
<rogpeppe> hazmat: http://paste.ubuntu.com/5629750/
<hazmat> rogpeppe, thanks
<hazmat> rogpeppe, you might also want to account for local changes made to machines during major upgrades
<rogpeppe> hazmat: what kind of thing are you thinking of?
<hazmat> rogpeppe, i modeled it as a separate upgrade job that the agent restarted itself into (only that job) upon seeing the major-version watch trigger, and could then do local changes after seeing the db upgrade complete.
<rogpeppe> hazmat: that's a good point
<hazmat> rogpeppe, ie. upgrade major.. barrier for agents into restart mode. db upgrade. local upgrade. restart
<rogpeppe> hazmat: perhaps a similar thing should apply as at the server side. we could have a subcommand "upgradeclient" or something
<hazmat> rogpeppe, its critical that agents signal into the barrier that they're in restart mode, else state usage is uncertain to the db upgrader
<rogpeppe> hazmat: yes, that's the intent
<rogpeppe> hazmat: although i see i've managed to omit that detail
<rogpeppe> hazmat: updated http://paste.ubuntu.com/5629770/
<hazmat> rogpeppe, another item worth calling out is that the upgrade coordination is happening server side, i.e. possibly long-running; don't depend on the client staying connected
<rogpeppe> hazmat: yes. that's kind of implicit, but probably worth mentioning.
<rogpeppe> hazmat: all we need is some indicator from each client.
<hazmat> rogpeppe, re api wait for reporting upgrade status. a minimal api impl for upgrade that just reports status sounds appropriate
<hazmat> either status of agents into the barrier, or simple stage and status.
<hazmat> its trickier.
<rogpeppe> hazmat: that's not so easy though, as we need to upgrade all the API servers too
<hazmat> rogpeppe, yeah.. i'm saying upgrade job can run a minimal api
<hazmat> er.. api servers have a minimal upgrade api job
<rogpeppe> hazmat: that might work actually - the upgrade job could publish its own address as the only API server address
 * hazmat bows out to a meeting
<rogpeppe> hazmat: but since the clients are all going to have to reconnect anyway, i'm not sure that emulating a subset of the API is necessary
<rogpeppe> hazmat: see ya
<rogpeppe> right, i'm off
<rogpeppe> see y'all in Oakie!
<mgz> so, their demo just demonstrated one bug, then that terminating a machine is something juju-core doesn't recover from, unlike juju, which starts a new machine
<hazmat> mgz, which demo?
<TheChistoso|2> hi guys -- sorry to bother you here, but i need help working around https://bugs.launchpad.net/juju-core/+bug/1172973
<_mup_> Bug #1172973: sync-tools requires aws credentials be set in the (shell) environment <juju-core:Confirmed> <https://launchpad.net/bugs/1172973>
#juju-dev 2013-05-05
<fwereade> TheChistoso|2, hi, did you get a response to your question about lp:1172973?
<fwereade> TheChistoso|2, if not, the source bucket is actually public: you can set it up by hand if necessary by copying just the tools you need from http://juju-dist.s3.amazonaws.com/ to matching paths in your public-bucket
<fwereade> TheChistoso|2, probably the only one you'll *need* is tools/juju-1.10.0-precise-amd64.tgz
<mgz> hm, are there any blueprints filled in currently that are good templates?
<mgz> I'm really not sure what to put for one of mine
<mgz> <https://blueprints.launchpad.net/ubuntu/+spec/servercloud-s-juju-2-delivery> seems relevent for most of us
#juju-dev 2014-04-28
<bodie_> is state/state.go State the entire Juju state or the state of a service?  sorry if this should be obvious
<TheMue> bodie_: State is the entry point into the whole state.
<TheMue> bodie_: the state of a single service can be retrieved from a State instance with Service(<name>)
<bodie_> ok, thanks :) that's what I figured, but I wanted to be sure
<TheMue> yw
<bodie_> is there a Watcher for every Unit or only for the Service?
<psivaa> hello, curious if i could get some help in finding the actual cause of bootstrap failure on a private cloud: http://paste.ubuntu.com/7351214/ ?
<jcw4> psivaa: it looks like dns resolution issues... are you able to ping the hosts that are failing from your private cloud location?
<psivaa> jcw4: i was able to resolve earlier. but can't now. i'll first cleanup the dns issue and retry. thanks
<psivaa> jcw4: dns was the issue, just did a bootstrap. thanks
<jcw4> psivaa: great. glad it's working
<fwereade> https://docs.google.com/a/canonical.com/document/d/1zIYTGFvVPE8VbU6OcMIACB2jKt_ISY-VuBor8TXciPM/edit#heading=h.dtiuimpw801x
<mgz> https://bugs.launchpad.net/juju-core/+bug/1231526
<_mup_> Bug #1231526: support vocabulary for charm config <feature> <juju-core:Triaged> <https://launchpad.net/bugs/1231526>
<mgz> fwereade: https://docs.google.com/a/canonical.com/document/d/1XZN2Wnqlag9je73mGqk-Qs9tx1gvOH0-wxMwrlDrQU4/edit#heading=h.pw5m1uty97lj
#juju-dev 2014-04-29
<psivaa> hello, help again pls, trying to figure out why I get 'error: cannot assign public address to instance' for most of the services i'm trying to deploy
<psivaa> http://paste.ubuntu.com/7358986/ is the juju status output
<psivaa> http://paste.ubuntu.com/7358312/ (quite big) is the machine-0.log
<psivaa> http://pastebin.ubuntu.com/7358553/ is a segment of the above log, probably relevant to the error
<mgz> davecheney: https://launchpad.net/~ubuntu-cloud-archive/+archive/cloud-tools-next
<davecheney> $ apt-get changelog
<davecheney> E: Handler silently failed
<davecheney> brilliant
<mgz> perrito666: http://paste.ubuntu.com/7361243/
<Makyo> jcsackett, https://github.com/makyo/juju-gui/tree/ecs-refactor
#juju-dev 2014-04-30
<mgz> menn0: plz review! :) https://codereview.appspot.com/94900043/
<bodie_> why do documents in the state/state.go State struct need a handle back to the State?
<bodie_> er.... not documents, types.  such as state.Settings
<bodie_> which wrap documents
<bodie_> is it necessary to use the state runner.txn stuff if I'm updating a document that no other code will touch?
<bodie_> (state/state.go)
<bodie_> I guess it's probably necessary to avoid collision with other transactions
<mgz> dimitern: https://codereview.appspot.com/93990043
<davecheney> REMOVE ALL THE SAFETY CHECKS!
<davecheney> ) % juju destroy-environment -y $(juju switch)
<jpds> davecheney: Good. ❤
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1305386
<_mup_> Bug #1305386: state/apiserver: multiple data races <juju-core:Triaged> <mgo:New> <https://launchpad.net/bugs/1305386>
<davecheney> Bug: https://bugs.launchpad.net/bugs/1305386
<davecheney> error: ERROR: Failed to update bug task: Server returned 400 and body: milestone_link: Constraint not satisfied
<davecheney> Wheee!
<mgz> perrito666: nova/api/openstack/wsgi.py from https://github.com/openstack/nova.git
<mgz> menn0: is your branch with more comment fixes up somewhere?
<menn0> mgz: not yet. i'm seeing a unit test failing consistently on my machine. Unlikely to be related to my change but I'd like to see a clean run before submitting for review just in case.
<mgz> menn0: what's the failure?
<menn0> mgz: it's a uniter test: TestUniterUpgradeConflicts
<perrito666> mgz: just to recap, you need a function that can give you the json encoded version of an error, shouldn't this be a method from the errtype?
<mgz> menn0: that one does pass for me:
<mgz> $ (cd worker/uniter&&go test -gocheck.v -gocheck.f=TestUniterUpgradeConflicts)
<mgz> PASS: uniter_test.go:998: UniterSuite.TestUniterUpgradeConflicts	6.855s
<mgz> OK: 1 passed
<mgz> PASS
<mgz> ok  	launchpad.net/juju-core/worker/uniter	11.379s
<mgz> perrito666: the argument for it not being a method, is we want it to handle errors of other types
<mgz> as in, try casting to our type of error, and use the code/Name()/message if so
<menn0> mgz: i've just tried with trunk and it still fails consistently for me
<mgz> if not, the code is 500 and the json we generate with the error string
<perrito666> mgz: I see, where do you want that ?
<mgz> menn0: it's likely something environmental, can you `... 2>&1|pastebinit`?
<mgz> perrito666: probably novaservice for now
<menn0> mgz: the output is pretty huge: http://paste.ubuntu.com/7369606/
<mgz> menn0: it's a lovely test
<menn0> failure is here: uniter_test.go:1813
<menn0> mgz: that's one word for it
<mgz> menn0: failure is from l969 in the file really
<mgz> why that charm is on rev1 rather than rev2 is an open question
<menn0> mgz: looks like Ian and Will have their names all over this file. Ian said he'll help me debug this later on.
#juju-dev 2014-05-01
<davecheney> https://code.launchpad.net/~dave-cheney/juju-core/170-update-to-mgo-latest/+merge/217839
<davecheney> can someone hulk smash this revision
<davecheney> it will never apply, for reasons obvious from the error in the MP
<menn0> mgz: turns out Will had already fixed this test but hadn't landed it in trunk. I've just marked the review as approved (with him sitting next to me).
<perrito666> mgz: looks like what you need? lp:~hduran-8/goose/json_encoded_errors
<mgz> menn0: good job
<mgz> perrito666: loooooking
<menn0> mgz: those changes landed and the tests pass now on my machine
<menn0> mgz: hooray :)
<perrito666> knock, knock, knock, mgz
<perrito666> knock, knock, knock, mgz
<mgz> perrito666: just got the net back
<mgz> comment #1: we want the fallback 500 to include the unknown error string
<mgz> perrito666: looks okay past that
<mgz> -ify is a bit funny
<perrito666> mgz: JSON, JSONEncode ?
<mgz> AsJSON?
<mgz> naming bikeshedding anyone?
<perrito666> that sounds a bit like fantastic twins :p but I can fix that
<perrito666> how do I lbox propose something and mark that it depends on another proposal?
<mgz> perrito666: it's a flag on propose, do --help
<menn0> tedious but easy review please: https://codereview.appspot.com/95920043
<mgz> menn0: you sorted out the appspot login thing?
<perrito666> mgz: https://codereview.appspot.com/97870044
<bodie_> hello all, we're getting a bit hung up on the lifecycle stuff since we're trying to implement a new collection of docs for Action
<bodie_> is there a technique for getting a unique sequential ID for a new document from txn?
<stokachu> hazmat: see where that websocket library now supports py3/2?
<hazmat> stokachu, awesome!
<hazmat> stokachu, don't worry someone will rewrite deployer in go
<stokachu> hazmat: haha
<stokachu> deployer's logic for waiting on services before relating should go into the juju-core code
<stokachu> like a 'services pending relation' queue
<davecheney> jamespage: is there a utopic ami ?
<davecheney> or cloud image that I can test with ?
<jamespage> davecheney, http://cloud-images.ubuntu.com/utopic/20140429/
<davecheney> jamespage: thanks
<davecheney> i guess they aren't published to ec2
<davecheney> i'm addicted to the ec2 links on the cloud images page
<jamespage> davecheney, maybe not yet
<jamespage> davecheney, utlemming would know when
<perrito666> mgz: strangely, the merge failed https://code.launchpad.net/~hduran-8/goose/testservice_errors/+merge/217818
<dimitern> perrito666, vladk|offline, mgz, a quick review about showing networks in status? https://codereview.appspot.com/91000043
<perrito666> dimitern: lgtm I added a comment there
<dimitern> perrito666, cheers!
<bodie_> o/ jcw4
<jcw4> \o
<jcw4> bodie_: I'll start adding the actionids to the unit
<bodie_> sweet
<bodie_> i'm hitting up rick for the hangout
<jcw4> and investigating how the unit would pull the actions off and execute them
<jcw4> bodie_: cool
<mgz> perrito666: that's odd
<adeuring1> https://codereview.appspot.com/96950043
<mgz> dimitern: did a post-landing review
<dimitern> mgz, cheers
<mgz> perrito666: lp:~gz/juju-core/os_listservers_errors_1222413
<perrito666> mgz: can you read me? if so pass me again the link, my irc proxy swallowed it
<mgz> perrito666: lp:~gz/juju-core/os_listservers_errors_1222413
<dimitern> mgz, https://codereview.appspot.com/90970045 updated status as suggested
<mgz> perrito666: https://codereview.appspot.com/91010044
<perrito666> mgz: rvd
<menn0> review please (Python script): https://codereview.appspot.com/94010044
<perrito666> mgz: ok, finally got your branch :
<perrito666> :p
<mgz> :D
<mgz> perrito666: landed first stage of your goose branches
<perrito666> mgz: cool
<perrito666> mgz: http://4.bp.blogspot.com/-YK_lIV2xxPs/T6iQnXqubbI/AAAAAAAAAUo/SY3DCN5mJzs/s1600/droopy-i-am-so-happy.jpg
<perrito666> mgz: hey, around your code, I cannot return a json encoded error, I either do a sendJSON with the encoded error and return the error or I return the actual error and you call the json encoding yourself
<mgz> perrito666: my code?
<mgz> or the bit in service_http.go?
<mgz> do a sendJSON for now if that's what you need
<perrito666> the bit on service_http.go
<mgz> we want unified error handling later, but shortest path to sane error response is right for now
<perrito666> mgz: I broke the universe https://pastebin.canonical.com/109483/
<perrito666> :p
<perrito666> will see whats up later, gotta change chairs now
<mgz> perrito666: ...what did you do...
<mgz> perrito666: looks like your cast is not working
<perrito666> mgz: nah, its the sendJSON part that is broken
<perrito666> that I broke that is
<mgz> perrito666: having any joy?
<jcw4> bodie_: ping
<mwhudson> how do i prevent bootstrap destroying the env if it fails?
<jcw4> bodie_: bits3rpent - I probably can't make the standup... fyi
#juju-dev 2014-05-02
<jcw4> bodie_: ping
<jcw4> how do I connect to a running state server mongo instance?
<jcw4> mongo -ssl 0.0.0.0:37017 gives connect failed, assuming it needs username/password?
<bodie_> jcw4, just saw your message from yesterday wrt Mongo
<jcw4> bodie_: :)
<bodie_> I'm not sure how to connect to a live stateserver instance of Mongo, does anyone else know?
<jcw4> I'm just trying to figure out the password
<jcw4> the admin-secret in the env file looks like it is the hash of a password
<jcw4> don't know how to use it to connect
<bodie_> hmm
<bodie_> perhaps we should be looking at where it spawns the connection instance
<jcw4> yep
<bodie_> I guess that's in bootstrap?
<jcw4> not directly
<jcw4> (cmd/juju/bootstrap.go that is)
<bodie_> perhaps try my-admin-secret
<jcw4> fail
<jcw4> :)
<bodie_> doh
<bodie_> i'm poking around in cmd/jujud/bootstrap_test.go
<jcw4> bodie_: hmmm; I think the default password is empty ?  waiting for my local bootstrap to complete to verify
<jcw4> bodie_: nope
<mgz> jcw4: see https://github.com/kapilt/juju-dbinspect
<jcw4> mgz: thank you!
<jcw4> bodie_: ^^
<jcw4> bodie_: http://blog.labix.org/2012/08/22/multi-doc-transactions-for-mongodb
<mgz> evilnickveitch: any progress?
<mgz> waigani: https://ec2.us-east-1.amazonaws.com/
#juju-dev 2014-05-03
<jcw4> /home/ubuntu/irclogs/FreeNode/#juju-dev.log:10:39 < jcw4> :)
#juju-dev 2015-04-27
<alexisb> hey wallyworld dont mind me messing with the api doc :)
<wallyworld> alexisb: sure, go for it :-) it's still very much WIP
<alexisb> yep, just reviewing things before I start linking them back
<wallyworld> alexisb: we'll want to add a server person for cloud init work
<alexisb> wallyworld, ack
<alexisb> what about landscape?
<alexisb> should I add dean
<wallyworld> yeah, still figuring out exactly what we want in that area, but we should make sure there's an overall consensus
<alexisb> ack
<voidspace> morning all
<perrito666> morning
<aznashwan> hey; could I please get a quick review on http://reviews.vapour.ws/r/1486/?
<aznashwan> note that Gabriel does a good job of explaining the reasoning behind the package's addition in response to Casey's comment (first one).
<natefinch> aznashwan: hey, that's a really neat and useful package for windows devs
<natefinch> aznashwan: I guess there wasn't one already written anywhere?
<aznashwan> natefinch: I had that written a while back but we only just got round to refactoring the Windows service package and its usecase popped up there...
<mup> Bug #1195757 changed: Package Juju for Windows <improvement> <windows> <juju-core:Fix Released> <https://launchpad.net/bugs/1195757>
<katco> wwitzel3: natefinch: stand up
<mup> Bug #1449050 was opened: Juju get returns a yaml file that can't be passed to set <juju-core:New> <https://launchpad.net/bugs/1449050>
<mup> Bug #1449054 was opened: Intermittent panic: rescanned document <ci> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1449054>
<aznashwan> natefinch: ping
<aznashwan> alexisb: ping
<alexisb> hey aznashwan whats up?
<aznashwan> alexisb: fresh out of holiday; thanks
<aznashwan> alexisb: just wanted to point out this thing: http://reviews.vapour.ws/r/1486/
<aznashwan> alexisb: @natefinch pointed out that we should put this in a separate package
<aznashwan> alexisb: I need to know who'd be the person who I'd best talk to about it...
<alexisb> aznashwan, are you in malta?
<aznashwan> alexisb: nope; back home.
<alexisb> ack
<alexisb> natefinch should be able to help
<alexisb> aznashwan, I can follow up with him when he is back online
<aznashwan> alexisb: much obliged :D
<natefinch> alexisb. aznashwan: back
<alexisb> natefinch, aznashwan is looking for guidance with separating windows securestrings into its own package (based on your suggestion)
<natefinch> alexisb: I agree with my suggestion ;)
<alexisb> lol
<alexisb> I trust you can help guide him as well :)
<natefinch> alexisb: sure thing
<alexisb> thanks natefinch
<aznashwan> natefinch: ping
<natefinch> aznashwan: howdy
<aznashwan> natefinch: so; any idea how we're going to proceed?
<mup> Bug #1449123 was opened: EnvJujuClient24._shell_environ does not correctly handle juju_home <juju-core:Triaged> <https://launchpad.net/bugs/1449123>
<aznashwan> natefinch: The thing itself is really tiny; so I don't personally see it as a worthy package in and of itself...
<natefinch> aznashwan: it's not about the size, it's about a unit of functionality
<aznashwan> natefinch: Gabriel at one time suggested I either keep it on me (currently it's been made private; vanity imports aren't that cool); or put it in on Cloudbase
<natefinch> aznashwan: anything which is worthy of its own package is worthy of its own repo, if you want other people to use it.  And we definitely want other people to use it, because open source etc.
<natefinch> aznashwan: did you build it on your own time, or on cloudbase's time?
<aznashwan> cloudbase's time/my own, I'm not sure; I was an unpaid intern :))
<natefinch> aznashwan: if you did it while "at work" then it belongs to cloudbase (assuming they do the same thing most companies do)
<aznashwan> natefinch: as you guys will
<natefinch> aznashwan: then it's up to you and cloudbase to decide where you want to release it.  I don't really care if it's under github.com/juju or github.com/cloudbase or wherever.  But I do think it deserves its own top level repo.
<aznashwan> natefinch: thanks for the insights
<natefinch> aznashwan: in theory it might work inside a more general windows-specific repo, but juju doesn't have one, and utils is too much of a jumble of stuff to really count.
<aznashwan> natefinch: that it is; sadly
<natefinch> aznashwan: ultimately, the point is to make it available and usable to the Go community in general.  If its in its own top level repo, that has the least barrier of entry.  It doesn't matter that it's small. There's plenty of packages in the standard library that are similar size.
<aznashwan> natefinch: also; while we're on the subject of making stuff repos of their own; Gabriel and I have been meaning to push for a separate version package for a while now
<natefinch> aznashwan: actually, small is nice, because it means you can be very precise about only importing what you really need, instead of importing the 40-million-windows-things package.
<natefinch> aznashwan: ahh yes, definitely
<natefinch> aznashwan: after looking at the problems with the packaging command stuff, I was convinced of the need for that as well.
<aznashwan> natefinch: I mean utils alone has a handful of places version detection is required beyond what simple build constraints can account for...
<aznashwan> natefinch: thanks a lot for all the support
<natefinch> aznashwan: welcome :)
<mup> Bug #1449123 changed: EnvJujuClient24._shell_environ does not correctly handle juju_home <juju-ci-tools:In Progress by abentley> <https://launchpad.net/bugs/1449123>
 * wwitzel3 shakes his fists at the sky
<mup> Bug #1449050 changed: Juju get returns a yaml file that can't be passed to set <config> <juju-core:New> <https://launchpad.net/bugs/1449050>
<natefinch> did we change what's required in environments.yaml?  Because after updating to master, when I do juju status on my amazon environment I get "invalid EC2 provider config: control-bucket: expected string, got nothing"
<mup> Bug #1447392 changed: ssh args list too long when bootstrapping <bootstrap> <regression> <juju-core:Fix Released by bteleaga> <https://launchpad.net/bugs/1447392>
<mup> Bug #1447846 changed: Hooks don't fire after upgrade 1.23.0 <hooks> <regression> <upgrade-juju> <juju-core:Fix Released by menno.smits> <juju-core 1.23:Fix Committed by menno.smits> <https://launchpad.net/bugs/1447846>
<natefinch> cmars: you around?
<cmars> natefinch, what's up?
<natefinch> I updated the review you looked at on friday: http://reviews.vapour.ws/r/1487/diff/#
<natefinch> cmars: it includes a test that I noticed was skipped on windows, even though it was testing the windows-only code :/  So I fixed the test.
<natefinch> wwitzel3: if you do juju status on an unbootstrapped amazon environment, do you get "invalid EC2 provider config: control-bucket: expected string, got nothing" ?  That seems like a new error, and incorrect, to boot.
<wwitzel3> natefinch: you asking me to verify that locally? or is this a change I landed?
<wwitzel3> natefinch: I agree that seems like the wrong error
<natefinch> wwitzel3: asking you to  try it out if it's not too much trouble to make sure I'm not crazy :)
<natefinch> wwitzel3: I don't have any reason to believe you're responsible, I Just happened to know you were at your keyboard ;)
<wwitzel3> natefinch: yep, same error in the Error detail
<wwitzel3> natefinch: :)
<perrito666> wow, adding a simple hook ends up being 10 lines of code :|
<natefinch> perrito666: I agree, that's pretty good ;)
<perrito666> ill make sure I add that to a document so it is also easy to find which are those 10 lines
<natefinch> perrito666: 10 lines is fine if they're all together.  If they're spread across 10 files, that's bad.
<perrito666> 5 files plus charms package
<natefinch> yeah that sucks
<perrito666> the charm package part will be fun
 * natefinch coughs  *code generation*
<perrito666> this is only valid for a very stupid hook
<perrito666> but the fact that we need to add that info to a separate repo raises all sort of doubts
<natefinch> yup
<mup> Bug #1449277 was opened: juju environment create fails on aws: invalid config <juju-core:Triaged by waigani> <https://launchpad.net/bugs/1449277>
<katco> wallyworld: want to do our 1:1 early?
<wallyworld> yup
<katco> wallyworld: cool... grabbing a drink and brt
<perrito666> anyone would be so kind? https://github.com/juju/charm/pull/122
<ericsnow> menn0: ping
<menn0> ericsnow: hi
<ericsnow> menn0: I landed your hooks/upgrade fix and wanted to make sure I didn't introduce a DB-related bug :)
<menn0> ericsnow: ok... what's happening?
<ericsnow> menn0: see #1449054 (incl. failures that predate my patch)
<mup> Bug #1449054: Intermittent panic: rescanned document <ci> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1449054>
<ericsnow> menn0: I don't think I introduced anything new, but got that exact same panic the first time I tried to merge, so wanted to be sure it wasn't something I broke :)
<ericsnow> s/anything new/any new bugs/
<menn0> ericsnow: I remember looking at this issue when it happened way back in 1.21. that time it was caused by a bug in mgo/txn which was fixed.
<ericsnow> menn0: yep
<menn0> ericsnow: given what was in the fix you committed and given that it was happening also before that fix I don't think it's that
<ericsnow> menn0: so either it regressed or we are hitting some corner case in DB-related code that was added relatively recently (1.23?)
<menn0> ericsnow: yep
<menn0> ericsnow: let me take a look at what was committed anyway
<ericsnow> menn0: k, thanks
<menn0> ericsnow: I can't see the PR on GH under my name or yours
<ericsnow> menn0: https://github.com/juju/juju/pull/2127
<menn0> ericsnow: thanks... I skipped straight over it when looking at your PRs :)
<ericsnow> :)
<menn0> ericsnow: the PR looks good and I don't think there's any way this upgrade step could cause that panic in the apiserver tests
<menn0> ericsnow: I think the 2 aren't related
<ericsnow> menn0: cool, that's what I figured
<ericsnow> menn0: thanks for taking a look
<menn0> ericsnow: now the fun part: figuring out what IS causing that panic.
<ericsnow> menn0: "is fun for you but we have to clean up after"
<ericsnow> (from "the man who knew too little")
<ericsnow> :)
<mgz> ericsnow, menn0: bug 1449054
<mup> Bug #1449054: Intermittent panic: rescanned document <ci> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1449054>
<menn0> mgz: yep, had seen that
<mgz> william and I looked at the code again today, pretty clearly not from that change, and I went over the CI history and filed that bug for it
<menn0> mgz: I had just commented on it
<mgz> I think from the three cases where we caught it in CI, juju is failing to react well to an ill machine underneath it
<menn0> mgz: given that it happened before the fix for bug 1447846 went in it's pretty unlikely to be that
<mup> Bug #1447846: Hooks don't fire after upgrade 1.23.0 <hooks> <regression> <upgrade-juju> <juju-core:Fix Released by menno.smits> <juju-core 1.23:Fix Committed by menno.smits> <https://launchpad.net/bugs/1447846>
<menn0> mgz: it still shouldn't happen obviously.
<menn0> mgz: finding a link between a sick machine and problems with the txn layer will be fun
<mgz> indeed, that's why I tracked down it happening in 1.23-alpha1
 * menn0 checks something
<menn0> mgz: what was the date we first saw this problem in recent times?
<mgz> menn0: Feb 12th, rev 4bfd0056afb508b41927d871bde0604fdfb665b1
<menn0> mgz: thanks. the last dep update for the mgo repo was well before that so it's unlikely to be a mgo change
<menn0> mgz: last time we saw this issue it was a bug in mgo/txn
<menn0> mgz: what's the best way to find the console logs of all recent fails due to this panic?
<mgz> menn0: I linked the entire set in the bug
<mgz> it's just the three
<mgz> also, looking at the surrounding failures is informative
<menn0> mgz: ok cool. I wasn't sure if that was the whole set or not.
<mgz> the utopic machine was certainly not well on Friday
<mgz> my current guess is it's our driver or mongo not reacting well to a disk read error or similar and dropping data
<mgz> then being surprised when it checks later and finds its not there
<mgz> *write
<menn0> mgz: that sounds plausible
<mgz> anyway, the kind of thing where the whole machine is imminently likely to die anyway
<mgz> so failing better would be nice, but not a high priority
<menn0> mgz: where is that machine hosted?
<mgz> menn0: ec2, us-east-1
<menn0> mgz: so could we just blow it away and start again?
<wallyworld> menn0: can you remind me, that rsyslog issue we discussed last week, i forget the reason you gave for hitting that
<mgz> menn0: yup
<menn0> wallyworld: the rsyslog-gnutls package isn't installed if the os update/upgrade options are turned off
<menn0> wallyworld: and without that the rsyslog config that juju wants to use can't work
<menn0> wallyworld: it's bug 1424892
<mup> Bug #1424892: rsyslog-gnutls is not installed when enable-os-refresh-update is false <cloud-init> <logging> <juju-core:In Progress by natefinch> <https://launchpad.net/bugs/1424892>
<wallyworld> menn0: ah yes, thank you
<menn0> wallyworld: were you offline just before when I wrote to you?
<wallyworld> i remember now
<wallyworld> menn0: my stupid irc connection keeps dropping
<wallyworld> NFI why
<menn0> wallyworld: looks like nate and katco have been looking at the rsyslog issue
<wallyworld> menn0: yes, moonstone is all doing bugs atm
<wallyworld> while waiting for feature clarification
<axw> wallyworld: seems there's a problem with my webcam, trying to fix now
<anastasiamac> wallyworld: standup?
<wallyworld> axw: google hates me, be there in a bit
<mup> Bug #1449301 was opened: storage: storage cannot be destroyed <storage> <juju-core:Triaged by axwalk> <https://launchpad.net/bugs/1449301>
<mup> Bug #1449302 was opened: upgrades: old machines need block devices document <storage> <juju-core:Triaged> <https://launchpad.net/bugs/1449302>
<mup> Bug #1449301 changed: storage: storage cannot be destroyed <storage> <juju-core:Triaged by axwalk> <https://launchpad.net/bugs/1449301>
<mup> Bug #1449302 changed: upgrades: old machines need block devices document <storage> <juju-core:Triaged> <https://launchpad.net/bugs/1449302>
<menn0> wallyworld: if you have a moment, could you pls create a feature branch on the juju repo for me called "db-log"?
<wallyworld> sure
#juju-dev 2015-04-28
<menn0> wallyworld: thanks
<wallyworld> menn0: actually, i don't have credentials for jujubot, i'll ask the qa guys
<menn0> wallyworld: I would have thought you could do it with your own account?
<wallyworld> menn0: that's done
<wallyworld> menn0: yeah, i could, i didn't realise i could :-)
<menn0> wallyworld: cheers :)
<wallyworld> np
<wallyworld> menn0: fyi, i'm working on the status 2.0 spec based on notes from the sprint; still very much wip https://docs.google.com/a/canonical.com/document/d/19ljAmIe3wVC-jpDRFl8KfvJKQzx0HIC0yuPu3cCMh7Q
<menn0> wallyworld: having a quick look
<menn0> wallyworld: looking good so far
<menn0> wallyworld: I made a few tiny typo corrections
<wallyworld> menn0: oh ty, didn't mean to burden you just yet with corrections :-)
<menn0> wallyworld: I couldn't help myself :)
<wallyworld> :-)
<sinzui> wallyworld, Do you have minute to review http://reviews.vapour.ws/r/1497/
<wallyworld> sure
<wallyworld> sinzui: you always give me the difficult ones
<sinzui> wallyworld, its an opportunity for you to say STOP, I have another issue that needs fixing
<wallyworld> sinzui: i left off the :-)
<sinzui> :)
<sinzui> wallyworld, http://reports.vapour.ws/releases/2570 shows that 1.23.2  Passed every test, even against adverse dirty substrates and flakey vivid
<wallyworld> yay
<sinzui> This could be the highest score ever gotten
<wallyworld> now all we need is for 1.24 to do the same
<wallyworld> let's hope it's reproducible then
<axw> wallyworld: reviewed your branch
<wallyworld> ty
<wallyworld> axw: with the client facade, william didn't want to bump that beyond 0. the only other alternative is to introduce a whole new facade and stick the method on that. I started going down that path but couldn't decide on a suitable facade so took the path of least resistance (the branch was already big enough and i was trying to get it done in time before branching). given we already have AddMachinesV2, any proper refactoring work would have to sort out that stuff also, which is getting beyond the scope of removing a storage feature flag
<axw> wallyworld: why does he not want to bump the version?
<wallyworld> gets messy - we would need to fire up api servers for both 0 and the new version, and we want to ultimately move stuff off client anyway
<wallyworld> we support uniter v 0, 1, 2, but that's a much smaller facade
<wallyworld> i was trying to avoid depending on a charm repo change with the patching but i guess that's unavoidable :-(
<wallyworld> and even with the uniter stuff, the tests are incomplete as it's hard to write common tests to cover all facade versions
<wallyworld> s/hard/tedious due to code cut and paste
<axw> wallyworld: can we at least hide this ugliness in api/client? so if the caller of AddMachines specifies storage, it calls AddMachinesV3 instead of V2
<axw> wallyworld: not sure if you saw this before
<axw> <axw> wallyworld: can we at least hide this ugliness in api/client? so if the caller of AddMachines specifies storage, it calls AddMachinesV3 instead of V2
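axw's suggested dispatch could be sketched roughly like this; the names here (`MachineParams`, the V2/V3 methods, the returned strings) are illustrative stand-ins, not juju's real api/client signatures:

```go
package main

import "fmt"

// MachineParams is a stand-in for the real params struct; only the field
// that drives the dispatch is shown.
type MachineParams struct {
	Series  string
	Storage []string
}

type Client struct{}

// Hypothetical per-version calls; each would hit a different facade method.
func (c *Client) addMachinesV2(params []MachineParams) string { return "Client.AddMachinesV2" }
func (c *Client) addMachinesV3(params []MachineParams) string { return "Client.AddMachinesV3" }

// AddMachines hides the version split from callers: requesting storage
// routes to the newer facade call, everything else stays on V2.
func (c *Client) AddMachines(params []MachineParams) string {
	for _, p := range params {
		if len(p.Storage) > 0 {
			return c.addMachinesV3(params)
		}
	}
	return c.addMachinesV2(params)
}

func main() {
	c := &Client{}
	fmt.Println(c.AddMachines([]MachineParams{{Series: "trusty"}}))
	fmt.Println(c.AddMachines([]MachineParams{{Series: "trusty", Storage: []string{"ebs"}}}))
}
```

The point is that callers keep a single `AddMachines` entry point while the ugliness stays inside the client wrapper.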
<wallyworld> axw: here's a patch to the charm repo to allow juju to patch the NewCharmStore from any package https://github.com/juju/charm/pull/123
<axw> wallyworld: ?! why?
<axw> where is this needed?
<wallyworld> axw: to allow core tests to run - stuff in the apiserver/service package calls into methods in apiserver/client and the apiserver/client code calls NewCharmRepo and this needs to be patched. the apiserver/client package patches NewCharmRepo but you objected to that being exposed to apiserver/service package
<axw> wallyworld: I object to patching across packages in general, not that one case
 * axw looks at code in question
<wallyworld> so it's easier to just patch at the source of the charm repo creation instead of indirectly
<wallyworld> axw: sadly we already patch charmrepo.CacheDir
<wallyworld> so i'm at least lining up with what's done already
<axw> yeah, I want to stop the sadness :)  I'm looking if there's a better alternative..
<wallyworld> and the client code patches across packages anyway to patch charmrepo.NewCharmRepo
<wallyworld> longer term, agreed. but this is simply to remove a feature flag to get 1.24 out
<wallyworld> it's not introducing new badness
<axw> wallyworld: service doesn't call into client, client calls into service
<wallyworld> service calls s.APIState.Client().AddCharmWithAuthorization(curl, nil)
<axw> client previously *internally* patched that function, which isn't great but at least doesn't spread the hack around
<wallyworld> to set up a test
 * axw looks again
<axw> I see
<wallyworld> yes it did, but any code can also patch charmrepo.CacheDir
<wallyworld> this is a consequence of starting to move service apis off the client facade
<wallyworld> and onto their own Service facade
<wallyworld> there's a whole bunch of other service apis with todos to move
<wallyworld> sadly, if we designed our code using inversion of control, this wouldn't be an issue, but we didn't :-(
<axw> wallyworld: this can be fixed by changing apiserver/service to operate on interfaces rather than *state.State. getting out of scope, so please ditch the juju/charm change and use the original patch change and add a TODO(wallyworld)
<axw> if we're adding hacks, let's at least confine them to juju/juju
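The "patch at the source" approach under discussion is the usual Go package-level seam: production code calls through a variable, and a test reassigns it (juju's test helpers automate the save/restore). A minimal sketch with made-up names, not the actual charm repo API:

```go
package main

import "fmt"

// Repo is a stand-in for the charm repository interface.
type Repo interface {
	Get(curl string) string
}

type realRepo struct{}

func (realRepo) Get(curl string) string { return "fetched " + curl + " from the charm store" }

// newCharmRepo is the seam: production code calls through this variable,
// and tests swap it for a fake.
var newCharmRepo = func() Repo { return realRepo{} }

func deployCharm(curl string) string {
	return newCharmRepo().Get(curl)
}

type fakeRepo struct{}

func (fakeRepo) Get(curl string) string { return "fake " + curl }

func main() {
	// What a test (or a PatchValue-style helper) would do: swap, use, restore.
	saved := newCharmRepo
	newCharmRepo = func() Repo { return fakeRepo{} }
	defer func() { newCharmRepo = saved }()
	fmt.Println(deployCharm("cs:trusty/mysql"))
}
```

Patching the variable in the package that owns it keeps the hack local; reaching across packages to patch someone else's variable is the "sadness" axw wants to stop.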
<anastasiamac> axw: wallyworld: on a different note, here is the start of dynamic add - http://reviews.vapour.ws/r/1498/
<axw> anastasiamac: reviewing now
<anastasiamac> axw: tyvm :D
<axw> anastasiamac: done
<wallyworld> axw: so, just to confirm you're happy to retain the x-package patching as per what i submitted so that we don't have to modify charm.v5? that was my original thought and rationale also
<wallyworld> axw: FFS, disconnected again, may have missed your reply
<axw> wallyworld: sorry, was afk. yes.
<axw> wallyworld: provided it gets replaced later with an interface and the cross-package patching removed
<wallyworld> axw: yeah, there's a bit of work to do there. won't happen for 1.24 I suspect, best we can do is 1.25. i've updated the PR on RB
<axw> looking
<wallyworld> axw: i guess i should add a MachineManager facade and stick the new AddMachine API on that. i was previously trying to avoid missing the 1.24 branching but given that's already happened, might be best not to introduce another legacy API to have to support
<axw> wallyworld: that would be ideal
<wallyworld> yeah, agreed. sigh. branch already so big
<axw> wallyworld: I already gave a shipit, you don't have to do it immediately
<wallyworld> axw: ok, i'll land but then do an immediate followup
<axw> cool
<axw> wallyworld: is that a straight backport? do I need to review again?
<mup> Bug #1449367 was opened: remove storage feature flag <juju-core:Triaged by wallyworld> <juju-core 1.24:In Progress by wallyworld> <https://launchpad.net/bugs/1449367>
<mup> Bug #1449390 was opened: storage: charms must wait for storage to be attached before running "install" hook <storage> <juju-core:Triaged> <https://launchpad.net/bugs/1449390>
<TheMue> voidspace: sorry for not being in hangout, have to help daughter
<voidspace> TheMue: no problem
<voidspace> TheMue: it's just you and me anyway :-)
<voidspace> TheMue: I'm working on forward porting the addressable container feature flag to master
<voidspace> TheMue: nearly done
<voidspace> TheMue: hard work - just about none of the patch applied cleanly
<voidspace> TheMue: only two more files to do though
<voidspace> TheMue: that's my report :-)
<mup> Bug #1449436 was opened: Environment variables are not propagated to jujud on vivid <juju-core:New> <juju-core 1.23:New> <juju-core 1.24:Triaged by axwalk> <https://launchpad.net/bugs/1449436>
<TheMue> voidspace: so, "first aid" done, daughter has troubles with her car on the highway and asked what to do. and the other daughter is ill at home. :(
<TheMue> voidspace: I'm currently writing a little conference report about last week, then continue with the process changes with Katherine, and will then take a look at the list Dimiter sent
<voidspace> TheMue: ok
<axw> jam: gotta go, fix up for review here FYI -- http://reviews.vapour.ws/r/1500/
<jam> axw: have a good night
<jam> axw: I believe this should also go into 1.23, right?
<jam> Since that is what we're actually releasing on V
<Mmike> lads, when I bootstrap my env and then do 'juju ensure-availability', juju will fire up additional three units, set up replicaset for mongodb and all of that. Now when unit 0 dies (the initial bootstrap unit) I can't connect to my env any more. I need to manually change my environment.jenv file and remove references to machine 0 from there. Then juju works ok again. Now I want to remove machine 0 from juju, as it no longer exists, but juju won't let me do so, saying: "ERROR no machines were destroyed: machine 0 is required by the environment".
<Mmike> Is this by design, or am I doing something wrong?
<Mmike> Using juju 1.22 currently, but observed same behaviour on 1.23
<natefinch> hooooly shit.  just went to write a fake object to fulfill the CloudConfig interface, and umm.. it has over 60 functions :/
<davecheney> that's no interface .... mother of god!
<davecheney> Mmike: machine 0 is special, it cannot be removed
<natefinch> luckily the function I need to mock it for only uses like 6 of those functions, so I can replace the interface with a more narrow one, but still, jeezus.
<natefinch> davecheney, Mmike: you definitely should be able to kill machine 0 and have your environment still work fine.
<natefinch> (when in HA)
<natefinch> I don't know about actually tell juju to remove the phantom machine from its DB, but your environment should definitely still work.
<jam> natefinch: seems a bit oversized. Though you can do type
<jam> type MyFake struct { CloudConfig }
<jam> and then only override the ones you actually want to
<jam> (you'll get nil pointer dereference failures for anything that gets used that you didn't define)
<jam> Mmike: so I'd like to walk through it a bit with you.
<jam> After "juju ensure-availability" do you end up with 3 or 4 total machines?
<jam> (I would expect 3)
<jam> second, after those machines are up and running, you should be able to run "juju status", and connect to that env
<jam> and then it will notice that there are more machines
<jam> and record them as alternatives in your environments.jenv
<jam> It won't actually record new machines until they are up and running and connected (AIUI)
<jam> but you shouldn't have to edit your JENV normally
<jam> just you have to connect to the environment once the extra machines are running.
<jam> Mmike: third, after machine-0 has died, you probably need to run "juju ensure-availability" again, where it should now notice that you have a dead machine, and try to take it out of official API server status
<jam> (this is a bit of a wart, and we're looking at splitting up ensure-ha into more precise commands)
<jam> and then you can "juju destroy-machine machine-0" (sp?)
<jam> once it has lost its voting status
<Mmike> jam: i see
<Mmike> jam: let me try that
<natefinch> jam: good idea embedding the interface in the struct
<perrito666> morning all
<Mmike> jam: where can I find the password jujud is using to connect to mongod?
<jam> Mmike: you have to look on machine-0 for it, it should be in /var/lib/juju/agents/machine-0/agent.conf I think
<jam> Mmike: sorry, looks like "/var/lib/juju/agents/machine-0.conf"
<Mmike> jam: ack, thnx
<perrito666> nope /var/lib/juju/agents/machine-0/agent.conf
<perrito666> jam: unlest it changed
<jam> perrito666: that's what I originally thought, but our backups test disagree
<jam> perrito666: ./state/backups/files_test.go:114:              filepath.Join(s.root, "/var/lib/juju/agents/machine-0.conf"),
<Mmike> jam: actually on 1.22 it's in  /var/lib/juju/agents/machine-0/agent.conf
<Mmike> just verified
<perrito666> jam: ill remember to ping eric on that one
<jam> perrito666: k. That sounds suspiciously like its a bad test.
<jam> perrito666: but "cmds/juju-restore" does use /var/lib/juju/agents/machine-0/agent.conf
<perrito666> jam: it is
<perrito666> jam: looking at the code that it is testing, it does not really matter since that seems to be the result of faking a glob result
<perrito666> but it is misleading
<jam> perrito666: yeah, I'm not saying the test doesn't pass, but we shouldn't have incorrect paths in a backup test :)
<perrito666> I have opened and restored those files enough times to know it is named agent.conf :p
 * perrito666 makes a note for a less urgent time
<jam> perrito666: :)
<jam> yeah, I would guess you did
<dimitern> voidspace, ping
<jam> Mmike: did you get things to work?
<Mmike> jam: nope, not yet, my openstack deploy is a bit slow as I'm testing some other stuff too, so it takes time...
<jam> Mmike: np
<jam> just curious
<Mmike> jam: yup, have a case about that, so I need to know how it works
<Mmike> and for convenience too :)
<dimitern> fwereade, http://reviews.vapour.ws/r/1501/
<jam> dimitern: is it actually allowed to start with - or _
<jam> ?
<dimitern> jam, it is in fact
<dimitern> jam, rvba just confirmed that
<jam> dimitern: is there any reason not to be slightly more restrictive than them?
<jam> I think avoiding opening "-" is probably a good thing
<dimitern> jam, apart from another critical blocker from OIL
<dimitern> can't think of one :)
<dimitern> in case they now decide to have names like "-my_NET"
<fwereade> dimitern, jam: so the core of the problem is that we use provider ids as network names
<fwereade> dimitern, jam, which means we need to come up with a regexp that matches every possible valid provider network id
<dimitern> fwereade, jam, yeah, that's the crux of it; but it's much harder to fix it properly
<fwereade> dimitern, jam, but is still somehow useful
<dimitern> fwereade, jam, also, fwiw this only applies to maas anyway now
<fwereade> dimitern, jam: with that patch, can we still transform name<->tag safely?
<fwereade> dimitern, jam: and can we depend on correctly munging globalKeys?
<dimitern> fwereade, jam, yes - there are tests for that in fact, which didn't stop passing, and we don't allow /
<jam> or # right?
<dimitern> yeah
<Mmike> jam: so, here is what I did: juju bootstrap; juju ensure-availability. Then I did 'juju status' until I had all three machines with "state-server-member-status: has-vote". Then I deployed some service (percona-cluster, with 3 units).
<Mmike> Waited for that service to settle down, verified all is ok.
<Mmike> After that I killed the machine 0 (as this is openstack, I did nova delete $instance_id). Did juju status after that, waited for like 2-3 minutes, and then juju status returned status with machine0 being down.
<Mmike> Also, some percona-cluster units were shown as 'agent-state: down'.
<jam> Mmike: 2-3 minutes sounds odd to me
<jam> Mmike: one option is to use "juju status --verbose" (-v) which should have it report what its trying to do
<jam> is it still 2-3 min per status?
<jam> or is status fast now?
<Mmike> Then I did juju ensure-availability, juju instantiated another machine. Waited for that one to have 'state-server-member-status: has-vote'. Then did: "juju destroy-machine 0", but juju complained about 'machine 0 being needed for the env'
<Mmike> no, it is fast now
<Mmike> just the first run after I killed machine 0 was slow.
<Mmike> ok, now all of my percona-cluster units are up, and marked ok (no more 'agent-state: down').
<Mmike> But I still can't remove machine 0.
<jam> Mmike: can you pastebin the "juju status" output ?
<Mmike> sure
<Mmike> jam: http://pastebin.ubuntu.com/10924694/
<Mmike> jam: I can paste whole termlog from the first bootstrap, if needed
<jam> Mmike: so i *believe* that now that machine-0 has been removed from voting, you can do another "juju ensure-availability" and it will be removed as a state server entirely, and then you can "juju destroy-machine machine-0"
 * Mmike tries
<jam> I think each transition needs another ensure-availability call. starts with has-vote, then goes to no-vote, then goes to no-longer-a-state-server
<Mmike> jam: ack, confirmed.
<Mmike> jam: excellent! :)
<Mmike> jam: thank you very much
<Mmike> just one more, though
<Mmike> so let's say I had 3 state machines. One of them died. Then another died. Now mongodb on remaining unit is in readonly state.
<Mmike> I don't have juju backup.
<Mmike> I'm thinking, then, connecting to the remaining machine, and 'resetting' mongodb - forcing it to become primary again.
<Mmike> Will that work? That is, will I be able to use juju afterwards?
<Mmike> (Havent tried, just curious)
<jam> Mmike: so if you manually poke the replicaset document in mongo
<jam> you should be able to do anything you want :)
<jam> just don't do it wrong.
<jam> obviously with db surgery you can make anything work. I'm not 100% sure how much Juju would let you get away with, because it will likely try to write the replicaset document to match its own state
<Mmike> well, obviously the 'right thing to do' is to have backups.
<Mmike> But, there being only two kinds of people in the world... sooner or later I'll have to deal with 'we WILL be doing backups' kind of people :)
<Mmike> jam: thnx for the inputs, will try breaking my env later again to verify if poking with mongo will bring it back to life
<jam> Mmike: have you verified that Juju becomes completely unhappy with 2 machines down?
<jam> It is *possible* that "juju ensure-availability" would rewrite things to try and get back into 3-man mode.
<jam> but I can entirely believe that we didn't get there.
<jam> And IIRC, we don't support "juju ensure-availability -n1" to get back out of HA mode
<Mmike> jam: nop, tbh. But I know from previous mongodb experience that if two nodes go down the last remaining node is in read-only, as it has no quorum.
<jam> Mmike: but being able to get a DB backup at that point seems *really useful*
 * perrito666 uses deployer for the first time ever
<katco> natefinch: standup
<aznashwan> natefinch: ping
<natefinch> aznashwan: sup?
<aznashwan> natefinch: sorry
<aznashwan> natefinch: got distracted
<aznashwan> natefinch: could give me an idiot's guide to using that perl script for syscalls on windows?
<aznashwan> natefinch: (if you could exemplify on your npipe package what the result should be; that would be awesome)
 * perrito666 drops a bucket of debug messages onto deployer
<natefinch> aznashwan: all I really did was look at the files in the syscall stdlib and copy what they did there.  The zsyscall_* files give the command line to use, and show the input file, which contains the patterns that generate the syscalls in comments at the top of the file.
<natefinch> aznashwan: I meant to write a blog post about how to do it back when I wrote npipe, but never got around to it. I should brush up on it and do so, it would be pretty useful, I think.
<voidspace> dimitern: just FYI, I merged the tests as well
<voidspace> dimitern: that was easier
<voidspace> dimitern: I have a bunch of failing tests to fix now
<voidspace> dimitern: not all obvious, but I'll work through them
<dimitern> voidspace, sweet!
<voidspace> dimitern: hey, we have a test "TestNewCloudInitConfigNoFeatureFlag"
<voidspace> dimitern: testing maas config generation
<voidspace> https://github.com/juju/juju/compare/master...voidspace:addressable-featureflag
<mattyw> jam, are you around this week?
<voidspace> (the first s.SetFeatureFlags() in that branch is spurious I assume - and I've removed it locally)
<voidspace> dimitern: the intent of the test seems to be to test we get the same config with feature flag on and off
<voidspace> dimitern: but we don't
<voidspace> dimitern: we get the juju-br0 stuff when the flag is off
<jam> mattyw: I'm around
<voidspace> dimitern: so the test seems flawed (and is failing for this reason)
<jam> though its EOD for me now
<mgz_> dimitern: I failed to find you in person, but can I have a review please? :)
<mgz_> dimitern: https://github.com/go-goose/goose/pull/8/files
<mup> Bug #1449613 was opened: service/windows is missing unit tests. <juju-core:New> <juju-core 1.23:New> <juju-core 1.24:New> <https://launchpad.net/bugs/1449613>
<mgz_> katco: can I wave bug 1447841 at you?
<mup> Bug #1447841: eu-central-1 AWS region V4 signing required and not supported <ec2-provider> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1447841>
<katco> mgz_: sure tal
<katco> mgz_: should be an easy fix
<natefinch> perrito666: review please? http://reviews.vapour.ws/r/1504/
<mup> Bug #1449617 was opened: service.Service implementations are missing functional tests. <juju-core:New> <juju-core 1.23:New> <juju-core 1.24:New> <https://launchpad.net/bugs/1449617>
<perrito666> natefinch: reviewed, just one issue
<katco> mgz_: very small review for that bug: http://reviews.vapour.ws/r/1505/
<dimitern> voidspace, hey
<mgz_> pants, we have in fact not switched to gopkg.in goose yet
<dimitern> voidspace, why that's happening can be gnarly - check basically jujuconnsuite's setupsuite and setuptest - there might be something overriding the set FF
<mgz_> katco: change lgtm - no tests need updating? also, s3 auth works with the same auth object as ec2 okay?
<mgz_> I'm confident our testing will fail fast if anything is wrong
<katco> mgz_: yeah s3 defaults to v4, we deprecated v2 signing for s3
<katco> mgz_: i haven't run tests, let me do that rq for just the ec2 provider package
<katco> mgz_: those tests pass. i'll try and land and see what happens
<alexisb> ericsnow, as always thank you!
<mup> Bug #1449633 was opened: Cannot terminate/remove broken state server after ensure-availability <cts> <juju-core:New> <https://launchpad.net/bugs/1449633>
<mgz_> should we be sorting external import alphanum-wise? what can I use to do that?
<mgz_> hm, seems not, I have github.com and gopkg.in both ways round
<natefinch> mgz_: go fmt will alphabetize
<natefinch> mgz_: it sorts each section independently (where a section is delimited by blank lines)
<natefinch> perrito666: thanks
<perrito666> np
<mgz_> natefinch: yeah, I was wondering about sorting the blocks after that - which we seem not to do
<natefinch> mgz_: go fmt definitely sorts all the blocks... try it out here:  http://play.golang.org/p/lb4Nc0Nz2B  the blocks are intentionally out of order to start
<mup> Bug #1449633 was opened: Cannot terminate/remove broken state server after ensure-availability <cts> <juju-core:New> <https://launchpad.net/bugs/1449633>
<katco> perrito666: hey do you need any help with 1441826?
<perrito666> katco: I think I just nailed it :)
<katco> perrito666: course you did... bc you're AWESOME ;p
<natefinch> perrito666: nice
<perrito666> I am writing the patch now to see if it works, but apparently it's a shim missing in the multiwatcher status
<mgz_> ocr: can I request another review on http://reviews.vapour.ws/r/1473/
<mgz_> I needed to bump goose version (and location) to add a test)
<perrito666> mm, that is a nasty thing to read on reviewboard
<mgz_> perrito666: blame Ian
<perrito666> ok
<perrito666> git blame ian
<mgz_> it was a nice clean change till I had to update all the imports
 * perrito666 impatiently awaits for his stream to update
 * perrito666 reads in github and gets a pill for the headache
<mgz_> perrito666: looking at each commit in turn may be more informative
<perrito666> mgz_: do you arrange your tests in lexicographic order?
<mgz_> perrito666: I arrange them in random walk order
<perrito666> mgz_: brb, will review upon return,  I promise
<mgz_> perrito666: no problem
<mgz_> I'm off for now at least
<aznashwan> natefinch: sorry; got carried away again...
<natefinch> aznashwan: did you see my prior responses?
<aznashwan> natefinch: figured it out; it's quite handy
<aznashwan> natefinch: yes I did
<natefinch> aznashwan: ahh cool. Yeah. it's nice.
<aznashwan> natefinch: I used the go one in the syscall package
<aznashwan> natefinch: they should really raise awareness for it
<natefinch> aznashwan: yeah, I remembered they added a go version finally... after I had to struggle with installing perl on windows ;)
<aznashwan> natefinch: oh; it's yours?
<aznashwan> natefinch: Windows salutes you :D
<natefinch> aznashwan: no no... I was sorely tempted to port it, but never found the time.  Not sure who did, but not me :)
<aznashwan> natefinch: works great; but there was a lot of guesswork involved in figuring out how to use it for procs that weren't in kernel32
<katco> perrito666: how is the bug coming?
<perrito666> katco: fixed, it looks; fixing the tests now
<katco> perrito666: awesome :)
<perrito666> mmpf I am pretty sure network manager indicator has fewer functions than before :( I dont seem to be able to prioritize my connections
<perrito666> talk about useful test failure errors
<perrito666> Error: entity mismatch; got len 1; want 1
<natefinch> perrito666: heh wow
<katco> natefinch: hey is https://bugs.launchpad.net/juju-core/+bug/1446871 landed yet?
<mup> Bug #1446871: Unit hooks fail on windows if PATH is uppercase <ci> <hooks> <windows> <juju-core:In Progress by natefinch> <juju-core 1.24:In Progress by natefinch> <https://launchpad.net/bugs/1446871>
<natefinch> katco: I hadn't thought it was targeted at 1.24-alpha1, so I just have a PR up against master, which is blocked... but I could easily put it on 1.24 instead
<natefinch> katco: so, no :)
<katco> Nate looks like it's just targeted against 1.24
<katco> natefinch: mgz was working on https://bugs.launchpad.net/juju-core/1.24/+bug/1441826 but it's past his EoD... we're blocked until that is complete
<mup> Bug #1441826: deployer and quickstart are broken in 1.24-alpha1 <api> <blocker> <ci> <deployer> <quickstart> <regression> <juju-ci-tools:Triaged> <juju-core:Triaged by hduran-8> <juju-core 1.24:In Progress by hduran-8> <https://launchpad.net/bugs/1441826>
<katco> natefinch: looks like you might have had some insight there, so if you are blocked, it would be a good thing to look into
<natefinch> katco: I can certainly look at that one, though I thought perrito666 had that one handled
<katco> natefinch: i thought he was looking at https://bugs.launchpad.net/juju-core/+bug/1441826?
<mup> Bug #1441826: deployer and quickstart are broken in 1.24-alpha1 <api> <blocker> <ci> <deployer> <quickstart> <regression> <juju-ci-tools:Triaged> <juju-core:Triaged by hduran-8> <juju-core 1.24:In Progress by hduran-8> <https://launchpad.net/bugs/1441826>
<katco> gah... stupid copy/paste mistake
<perrito666> katco: you have a problem with your pastebin?
<katco> haha
<perrito666> you sound just like the error I pasted a moment ago
<katco> natefinch: i meant https://bugs.launchpad.net/juju-core/+bug/1440940
<mup> Bug #1440940: xml/marshal.go:10:2: cannot find package "encoding" <blocker> <ci> <regression> <test-failure> <juju-core:In Progress by gz> <juju-core 1.24:Triaged> <juju-release-tools:In Progress by gz> <https://launchpad.net/bugs/1440940>
<natefinch> oh yeah that one
<katco> natefinch: perrito666 is definitely working the other one
<katco> natefinch: but mgz thought he had a fix, but apparently not
<natefinch> katco: I'll look through what has been posted there since I last looked at the bug.
<katco> natefinch: cool ty
<katco> wwitzel3: are you blocked on 1326091?
<natefinch> sinzui: you around?
<sinzui> I am
<natefinch> in the above bug about encoding, one of your recent messages contains a typo that makes it hard for me to understand: "The release tarfile but contain the encoding src or pkg, or we avoid using github.com/juju/govmomi/vim25/xml."
<sinzui> natefinch, *must*
<natefinch> sinzui: ahh, ok, I couldn't figure out what it was supposed to be
<sinzui> natefinch, mgz advised that including the encoding package in the source tree would ensure gccgo found the package. He reported it was fixed but it is not. The packages don't build and I don't see the package, so I don't know what mgz changed to lead him to believe the issue was fixed
<sinzui> damn, too many uses of the word packages
<sinzui> We cannot make debs, and I don't see "encoding" in src/
<natefinch> sinzui: it seems odd to me that we package up the standard library in our tarfile. shouldn't the standard library just be installed on the target machine, and we just tar-up what's outside the stdlib?  I presume ubuntu doesn't build other languages by including the stdlib in the tar we give the builders.
<natefinch> (and I'm using a very broad version of "installed" .... I don't actually care how it arrives on disk)
<sinzui> natefinch, it seems odd to me we would fork xml and use it without checking it worked on all arches
<sinzui> regardless, *we* are obligated to deliver a tarball from which Ubuntu and LP can build ppc64el debs. so we need to provide a version of encoding that works with that xml package
<natefinch> sinzui: I guess my question would be...  why isn't the entire go std library on the builder?  Is it because there's no golang package we can apt-get install for PPC64el that we don't do it that way?
<sinzui> natefinch, Ubuntu/debian uses shared libs and they are already built and distributed for Juju to link to
<sinzui> natefinch, juju is *not* 100% static for ppc. it uses golib5
<natefinch> sinzui: I don't know what that is, unfortunately.
<sinzui> natefinch, a lib that doesn't have encoding in it, at least, not by that name
<natefinch> sinzui: where do we get golib from?  google isn't being very helpful for me there
<sinzui> well I can see there is an encoding, but it doesn't match what the xml package wants :(
<sinzui> natefinch, I think you are taking the wrong path here
<sinzui> natefinch, We will not release any 1.24 version in the next month if we are trying to get debian/ubuntu to change packages. mgz's solution is to include just what is needed to satisfy debian packaging rules
<natefinch> sinzui: we're getting an error that should be impossible.  The most likely reason is because we're doing things in weird ways to satisfy people's ideas of the way things should be built.  I'm just trying to figure out how this builder differs from what happens when I run gccgo.
<natefinch> sinzui: if I can't look at golib, I can't figure out why its encoding is different
<natefinch> sinzui: or how to fix it
<natefinch> (it/us/whatever)
<sinzui> natefinch, https://launchpad.net/ubuntu/+source/gccgo-go
<sinzui> natefinch, I think trusty used http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/trusty/gccgo-go/trusty-proposed/files
<sinzui> natefinch, but you are entering into packaging and linking in debian/gccgo which is not like golang
<natefinch> sinzui: so, I do see the encoding directory and encoding.go under src/pkg: http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/trusty/gccgo-go/trusty-proposed/files/head:/src/pkg/encoding/
<sinzui> natefinch, yes, we established that a few weeks ago
<sinzui> natefinch, the recent change here is that go build to make packages was working, now it doesn't. So the hack to create a goroot on the test machines won't work in the clean-room env used by builders
<natefinch> sinzui: I'm more than willing to help, and I'm sorry if I get frustrated.  There's a lot of the packaging part, especially for gccgo, that I still don't know well, so trying to figure out why something is going wrong is difficult without preloading a lot of knowledge.
<katco> natefinch: sinzui: it seems extraordinarily hazardous to me that we would patch the go stdlib to include an entire package
<katco> natefinch: sinzui: seems like we could get unexpected behavior quite easily, doesn't it?
<natefinch> katco: we forked the xml package.  I would not say we were patching the std lib at all.   we copied the xml package to github.com/juju/xml and reference it as a normal 3rd party package
<natefinch> katco: the problem seems to be that we're using any package that itself depends on the encoding package.
<katco> natefinch: that doesn't have a dependency graph that spiders out into other parts of the go stdlib?
<natefinch> katco: it should be exactly the same as if we just imported the xml package
<sinzui> natefinch, I found 3 changes between the time we could make packages and when we couldn't. mgz made two of them https://bugs.launchpad.net/juju-core/+bug/1440940/comments/15
<mup> Bug #1440940: xml/marshal.go:10:2: cannot find package "encoding" <blocker> <ci> <regression> <test-failure> <juju-core:In Progress by gz> <juju-core 1.24:Triaged> <juju-release-tools:In Progress by gz> <https://launchpad.net/bugs/1440940>
<katco> natefinch: well, what i'm getting at is we're using version X of the xml package with version Y of the rest of the go stdlib
<katco> natefinch: and i'm wondering if that might introduce unintended consequences
<natefinch> katco: it shouldn't matter, because the only code using our XML package is code we specifically wrote to use it.  It should either compile on all platforms or not.
<katco> natefinch: compilation =/= proof of correctness
<natefinch> katco: but if the problem is a compilation error....
<katco> natefinch: this particular problem; i'm questioning the larger strategy
<sinzui> natefinch, the xml package *did* compile a week ago. When only tests were broken I assumed it was test-double nonsense from gccgo
<natefinch> katco: it's really conceptually no different than forking any other package and using it instead of the original.  the stdlib isn't special, it's just normal go code.
<katco> natefinch: correct, but the version of the xml package we've imported was written to depend on perhaps a different version of the go stdlib
<katco> natefinch: what i'm getting at is: we're mucking with dependencies in a very non sustainable way
<natefinch> katco: there's no version of the stdlib that doesn't have the encoding package, though
<natefinch> katco: also, the stdlib is guaranteed backwards compatible to go1.0 and they're very strict about it
<natefinch> I think looking at sinzui's list of commits that might have broken it is a good way to tackle the problem.  In theory, something should stand out.
<katco> natefinch: from a signature perspective... not implementation
<katco> natefinch: this line of questioning is probably irrelevant at this point, i'll drop it
<natefinch> sinzui: is it possible to re-run the last test that worked?  To rule out environmental / process changes?
<katco> natefinch: but wow will this get messy fast
<natefinch> sinzui, katco: unfortunately, I'm at EOD.  I may be able to get online significantly later (9pm-ish eastern), but may not, depending on the state of my kids.
<katco> natefinch: hope everyone feels better, and hope the appt. went well today
<natefinch> I think it's worth going through the code changes to look for things that stand out, and also doing a double check to make sure that the last run that worked still works.
<natefinch> katco: thanks
<sinzui> natefinch, not in CI since the job is publishing. But in this case, this is the actual packaging rules. anyone can take the release tarball, make a source package from the packaging branch, and then build it (on ppc)
<sinzui> natefinch, note that building happens in fakeroot, so changes cannot just happen
<natefinch> *nod*
<natefinch> ok, gotta run, sorry.  Will try to be back on later.  I'll take a look at the changelists and possibly the tarballs themselves to see if I can figure out what's going on.
<natefinch> oh, one hack to fix it might just be to do an unnecessary import "encoding"  from somewhere under github.com/juju/juju so we force that it gets included in the tarball
<wwitzel3> katco: no, it isn't blocked
<katco> wwitzel3: k ty
<menn0> wallyworld: ping?
<mup> Bug #1449277 changed: juju environment create fails on aws: invalid config <juju-core:Invalid by waigani> <juju-core 1.24:Invalid by waigani> <https://launchpad.net/bugs/1449277>
<perrito666> so http://reviews.vapour.ws/r/1508/ and http://reviews.vapour.ws/r/1509/ Fix lp1441826 in 1.24 and master respectively, who would be so kind? wallyworld ?
<perrito666> katco: ?
<perrito666> ill go get some dinner and then come back
<ericsnow> could I get a review on http://reviews.vapour.ws/r/1510/?
<ericsnow> it's the fix we talked about on the call (build constraints for vsphere provider)
<ericsnow> it should be super easy to review
<perrito666> famous last words
<wallyworld> ericsnow: looks like there's additional unrelated changes in that diff
<ericsnow> wallyworld: ah, that's just RB doing its thing (I'll fix)
<ericsnow> fixed
<wallyworld> looking again
<wallyworld> ericsnow: what does the newly added init_gccgo do?
<ericsnow> wallyworld: makes it so I don't have to touch provider/all/all.go :)
<ericsnow> wallyworld: (allows import of "github.com/juju/juju/provider/vsphere")
<ericsnow> wallyworld: I'll add a note to that effect
<ericsnow> wallyworld: good to go? (I'm going AFK in a minute)
<wallyworld> ericsnow: bah, i got disconnected again, i asked a quesion - what does the newly added init_gccgo do?
<ericsnow> wallyworld: yep, and I responded :)  I've added a comment explaining (allows imports of the package to continue to work)
<wallyworld> sorry, didn't see
<ericsnow> np :)
<wallyworld> ericsnow: lgtm, ty
<ericsnow> wallyworld: thanks
#juju-dev 2015-04-29
<menn0> perrito666: review done. just some small suggestions. wallyworld should probably take a look too since he's worked on status recently.
<ericsnow> wallyworld: I've gotta run; could you shepherd those merges?
<ericsnow> (already kicked off for 1.24 and master)
<perrito666> tx menn0 and waigani
<perrito666> wallyworld:
<wallyworld> yo
<perrito666> wallyworld: was a thank you, but your connection problems drive my irc client mad
<wallyworld> me too :-(
<wallyworld> it just appears to be irc
<wallyworld> can still refresh browser etc ok when irc disconnects
<perrito666> wallyworld: it is your client acting on too much lag
<perrito666> your lost connection threshold is too small
<wallyworld> i'l check client config
<perrito666> wallyworld: in the mean time, could you clarify your first comment?
<wallyworld> defer the legacy status calculation until after agent and workload status are set in the block below
<wallyworld> then the special logic in the new method probably is not required
<wallyworld> just call translate directly with the agent and workload status values
<perrito666> duh, ok makes sense and then if the result is error I set workload status to error, did I get it right?
<perrito666> that function should have been deleted; I first tried a more complex solution, because I like to complicate my life
<waigani> wallyworld: thanks for review. I'm new to control-bucket. I was just taking the error on face value - It looked like a string that needed to be zero valued to "" instead of nil. Any tips on where to look to grok control-bucket and what its default value should be?
<perrito666> waigani: 1.18?
<waigani> perrito666: okay
<wallyworld> axw: revision 6 of http://reviews.vapour.ws/r/1481/ has the machine manager facade
<wallyworld> axw: there's currently work being done to tweak the control bucket logic for ec2, but with tools and charms in storage, we don't need control buckets for new environments anymore. i'm wondering if we shouldn't just delete control bucket creation
<axw> wallyworld: yes we do, we still need it for the state server list
<wallyworld> ah, yeah, forgot about that
<natefinch> sinzui: you around?
<mup> Bug #1449822 was opened: storage: storage-detached should be storage-detaching <storage> <juju-core:Triaged> <https://launchpad.net/bugs/1449822>
<mup> Bug #1449822 changed: storage: storage-detached should be storage-detaching <storage> <juju-core:Triaged> <https://launchpad.net/bugs/1449822>
<mup> Bug #1449822 was opened: storage: storage-detached should be storage-detaching <storage> <juju-core:Triaged> <https://launchpad.net/bugs/1449822>
<axw> wallyworld: have you tested your latest change to ff removal against an older juju API server?
<wallyworld> axw: yes
<axw> wallyworld: cool. I thought getting the API would error, rather than returning something with a 0 "best API version"
<axw> glad to be wrong
<menn0> wallyworld: looking at the blockdevices upgrade PR now
<wallyworld> ty
<menn0> wallyworld: ship it with some fixes
<mgz> mornin'
<TheMue> morning o/
<voidspace> dooferlad: ping
<dooferlad> voidspace: hi
<voidspace> dooferlad: hi, you better today?
<TheMue> ah, he's back. feeling better?
<voidspace> dooferlad: I assume the answer is yes as you're here :-)
<dooferlad> voidspace: mostly :-|
<dooferlad> voidspace, TheMue: hangout time!
<TheMue> omw
<voidspace> omw
<Bardhi|2> hello, i am trying to deploy some services with juju but it gets stuck on allocating units and the services are not deployed. i just started using juju and i am not an expert in using it and i don't know very well how it all works, but i need this service up and running as soon as possible. if someone could help me i would appreciate it. thank you
<axw> wallyworld: should cinder be going into 1.24?
<axw> (since it's not in tree atm)
<axw> <axw> wallyworld: should cinder be going into 1.24?
<axw> <axw> (since it's not in tree atm)
<wallyworld> axw: yes, we do need it
<axw> wallyworld: ok, will look at getting that ready for merge next
<wallyworld> axw: excellent, ty
<perrito666> wallyworld: got it, fixing that last bit now
<wallyworld> perrito666: ty
<perrito666> a night's sleep does wonders
<wallyworld> indeed
<perrito666> s/a night/6 hs/
<jam> rogpeppe: so ultimately, I feel like if we are "crossing the streams" then we have our layering and abstractions a bit wrong. Like I don't see why API Parameters would be defined in terms of model objects (params depending on charms)
<jam> however, I do feel like your proposal is better than what we have
<rogpeppe> jam: i think that consideration is largely orthogonal to the decisions we're making here
<jam> rogpeppe: it would let you decouple "params" from "charms" which would let you remove some of the crossing lines
<rogpeppe> jam: even if params didn't depend on the charm package, we'd still need to solve the cyclic issue (params has never been the problem here)
<perrito666> mgz: morning, ping?
 * perrito666 would give his kingdom for more upload speed
<perrito666> s/speed/bw
<rogpeppe> quick question: is relation-broken the only relation hook for which $JUJU_REMOTE_UNIT is not guaranteed to be set?
<rogpeppe> fwereade: ^
<mgz> perrito666: yo
<perrito666> mgz: do you know if this happens going from any consecutive version a to b from 1.22 on? https://bugs.launchpad.net/juju-core/+bug/1447853
<mup> Bug #1447853: Local charms are not added to storage on upgrade to 1.22.x <charms> <regression> <storage> <upgrade-juju> <juju-core:Triaged> <juju-core 1.24:Triaged by hduran-8> <https://launchpad.net/bugs/1447853>
<mgz> perrito666: it won't happen if you bootstrapped on 1.22 because your charms will go in mongo straight off
<mgz> perrito666: it will be an issue if you started on 1.21 or earlier regardless of how you're trying to upgrade
<perrito666> ah ok, then retrying with 1.21
<mup> Bug #1441811 changed: juju-1.23beta3 breaks glance <-> mysql relation when glance is hosted in a container <charms> <network> <oil> <regression> <juju-core:Invalid by dooferlad> <juju-core 1.23:Fix Released by dooferlad> <https://launchpad.net/bugs/1441811>
<jam> wallyworld:  or axw: I'm looking at https://bugs.launchpad.net/juju-core/+bug/1410876 and it seems to be trying to download a root.tar.gz from a Juju API server.
<mup> Bug #1410876: Error executing lxc-clone: lxc_container: utils.c: mkdir_p 220 Not a directory - Could not destroy  snapshot %s - failed to allocate a pty; Insufficent privileges to control  juju-trusty-lxc-template <lxc> <oil> <stakeholder-critical> <trusty> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1410876>
<wallyworld> jam: that looks like a missing cloud images package
<wallyworld> let me check the name
<jam> wallyworld: 500 Internal Server Error is not a good thing to be getting.
<jam> though they also couldn't include the all-machines.log because they were trying to "cat" as an unprivileged user.
<wallyworld> jam: cloud-image-utils needs to be installed or else the util to determine the lxc file name is missing
<wallyworld> jam: if they have os-updates false, then juju has a bug it wont install needed packages
<wallyworld> that bug is being fixed for 1.24
<jam> wallyworld: what file are you looking at? I'm looking at "juju_status.yaml" and trying to parse through the agent-state-info error message.
<wallyworld> jam: i just happen to know/guess the issue
<jam> wallyworld: so regardless, do we give a better error than 500 ISE if you try to download a cloud image we don't have?
<wallyworld> if it tries to download just root.tar.gz and not a full lxc image file name, the util is missing
<jam> wallyworld: its a full name
<jam> wallyworld: https://10.245.0.177:17070/environment/97640150-37d0-4356-80b1-095a39c78437/images/lxc/trusty/amd64/ubuntu-14.04-server-cloudimg-amd64-root.tar.gz;
<wallyworld> oh i see
<jam> that's just a lot to write :)
<wallyworld> in that case, ignore what i said above
<jam> wallyworld: but that looks like it is expecting the Juju API server to have the cloud images, right?
<jam> Is that the changes you made with GridFS storage?
<wallyworld> jam: the state server is a proxy - if it doesn't have the images it fetches from cloudimages.ubuntu.com and caches
<jam> console.txt claims it is running juju-1.22-beta2-trusty-amd64.tgz
<jam> wallyworld: k, well its giving 500 ISE instead :)
<wallyworld> the images are cached in gridfs yes
<wallyworld> the logs would be useful
<jam> wallyworld: and we don't have all-machines.log or machine-0.log to know if we're logging a real error internally. (like not being able to access cloud-images, or somesuch.)
<jam> wallyworld: they tried, but those files are: cat: /var/log/juju/all-machines.log: Permission denied
<wallyworld> sudo ?
<wallyworld> for local provider you need sudo for most things
<jam> wallyworld: + juju ssh -e maas 0 'cat /var/log/juju/all-machines.log'
<wallyworld> oh maas
<jam> Is the "-e maas" to blame ?
<wallyworld> don't think so, but if maas is the current env, could try leaving it off
<wallyworld> juju ssh i think supports -e
<wallyworld> jam: the lxc-clone error - that normally happens after the image has been downloaded (via the state server). is there a tarball in /var/cache/lxc/trusty... ?
<wallyworld> on the maas node
<jam> wallyworld: so wget for 1/lxc/0 is failing
<jam> afaict there are no containers on machine-0
<jam> so we don't see errors particularly there
<wallyworld> wget to state server or from state server out to cloudimages?
<jam> wallyworld: wget 10.245.0.177:17070 I assume that is to state server
<wallyworld> yes
<wallyworld> wget is written as a bash file to tmp in order to specify the cert to use
<jam> wallyworld: yeah, there is stuff about tmp files, but the request
<jam> the request *to* the state server causes the state server to return 500 ISE
<jam> and "juju ssh machine-0" is unable to read /var/log/juju/all-machines.log for debugging info
<wallyworld> so it seems the secure wget is succeeding, but then the state server fails to reach out to cloudimages to fetch the tarball from there to 1. cache it, and 2. stream it back to wget
<wallyworld> we really need the log files, preferably with debug on
<wallyworld> what does juju ssh ls /var/log/juju give?
<wallyworld> is all-machines.log missing
<jam> wallyworld: the message they got was "Permission Denied" not "no such file"
<wallyworld> jam: that could be another issue then - if rsyslog pkg is missing, all-machines log won't be written
<wallyworld> rsyslog-gnutls
<wallyworld> that package is normally installed by cloud init, but not if os-updates-blah is false
<wallyworld> that's the bug i mentioned earlier
<natefinch> does $$JFDI$$ not work anymore?  Or did I just mess it up?
<perrito666> natefinch: aha, trying to jfdi
<mup> Bug #1440940 changed: xml/marshal.go:10:2: cannot find package "encoding" <blocker> <ci> <regression> <test-failure> <juju-core:Fix Released by ericsnowcurrently> <juju-core 1.24:Fix Released by ericsnowcurrently> <juju-release-tools:In Progress by gz> <https://launchpad.net/bugs/1440940>
<natefinch> ericsnow_afk: you around?
<natefinch> perrito666: looks like it's $$__JFDI__$$  (but then the markup changes that to a bold JFDI)
<voidspace> ericsnow_afk: ping
<voidspace> natefinch: you know anything about systemd?
<voidspace> natefinch: anything would be more than me... :-)
<natefinch> voidspace: LOL   I probably know less than that
<voidspace> natefinch: :-)
<voidspace> natefinch: looks like I'm stuck with hassling ericsnow ...
<ericsnow> :)
<natefinch> voidspace: well, I know some of our handling of it.  I just don't know systemd *itself*
<natefinch> voidspace: if that makes any sense :)
<voidspace> natefinch: right
<voidspace> natefinch: the question is about our handling of it, just formulating the question
<ericsnow> voidspace: what's up?
<voidspace> ericsnow: I'm forward porting a networking fix from 1.23 to master (and 1.24)
<voidspace> ericsnow: I now have a test that fails with this error
<voidspace> http://pastebin.ubuntu.com/10940325/
<voidspace> ericsnow: which looks like a systemd change
<mgz> I have a fix for bug 1446264 in need of kindly review
<mup> Bug #1446264: joyent machines get stuck in provisioning <bootstrap> <joyent-provider> <reliability> <repeatability> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1446264>
<voidspace> ericsnow: are the two "effectively equivalent", i.e. can I just change the expected value
<natefinch> voidspace: lol, I read that as "port forwarding a networking fix"
<mgz> sinzui: ^ you may like this
<voidspace> heh
<voidspace> ericsnow: the test specifically is TestShutdownInitCommandsSystemd
<sinzui> mgz, \o/
<voidspace> hmmmm....
<voidspace> digging into the test it looks a bit weirder
<ericsnow> voidspace: presumably that test is checking the generated script
<voidspace> right
<voidspace> I need to get it to show the whole script
<mgz> ocr: <http://reviews.vapour.ws/r/1519/>
<ericsnow> voidspace: got it
<mgz> wallyworld: did you get my email btw?
<voidspace> ericsnow: this is the full result
<voidspace> http://paste.ubuntu.com/10940405/
<voidspace> it looks reasonable
<voidspace> the test is explicitly checking for a chmod though
<ericsnow> voidspace: which test?
<voidspace> container_userdata_test.go
<voidspace> cloudconfig/containerinit/container_userdata_test.go TestShutdownInitCommandsSystemd
<natefinch> mgz: ship it!
<mgz> natefinch: merci
<ericsnow> voidspace: OTP, now
<voidspace> ericsnow: np, I'm reading through it
<voidspace> ericsnow: it might be a genuine failure
<voidspace> yeah, it looks like there's definitely a chmod missing that the test expects, trying to work out why :-)
<voidspace> ericsnow: this is service/systemd/testing/writeconf.go that is expecting a "chmod 0755 " + wct.scriptname()
<TheMue> strange, router just went into service mode and back
<voidspace> ericsnow: for the systemd writeconf tests, you don't know where the "chmod 0755" is supposed to be added do you?
<ericsnow> voidspace: check out service/systemd/conf.go
<ericsnow> voidspace: (in normalize)
<voidspace> ericsnow: ah, I was grepping for chmod... that helps, thanks
<ericsnow> voidspace: np
<voidspace> ericsnow: hmm... normalize does a 600 though, not an 0755
<voidspace> that's on the logfile
<ericsnow> voidspace: ah
<ericsnow> voidspace: see service/systemd/service.go
<ericsnow> voidspace: (in InstallCommands)
<voidspace> ericsnow: indeed
<voidspace> ericsnow: thank you
<ericsnow> voidspace: that bit is triggered when you end up with a complex command
<voidspace> now to work out why that isn't being called in this code path
<voidspace> it is now *not* being called
<voidspace> and it's hard to see why this branch should change that
<ericsnow> voidspace: I'm guessing that the command was simplified
<TheMue> voidspace: a short review of http://reviews.vapour.ws/r/1521/? it's a forward port of a fix done by Dimiter
<voidspace> the execStart of the shutdown commands is different
<ericsnow> voidspace: exactly
<voidspace> TheMue: looking
<voidspace> TheMue: did you have to make any changes?
<ericsnow> voidspace: if the command is complex, it gets written out to a script and the path to the script is set to ExecStart
<voidspace> ericsnow: right
<voidspace> ericsnow: but the test should handle that
<voidspace> ericsnow: it has the same if as the InstallCommand branch
<voidspace> ericsnow: for some reason they're differing
<voidspace> TheMue: the way reviewboard renders the diff of that PR is awful :-)
<TheMue> voidspace: one moment, hangout
<ericsnow> voidspace: what does the equivalent pre-patch output look like? (equivalent  to http://paste.ubuntu.com/10940405/)
<voidspace> ericsnow: I'll check shortly, doing a review for TheMue currently
<voidspace> ericsnow: but good thing to check
<voidspace> ericsnow: and then work out what's changed
<ericsnow> voidspace: k
<ericsnow> voidspace: yep
<voidspace> ericsnow: you've given me enough clues to actually understand the code now, which is what I lacked earlier
<voidspace> ericsnow: I may still hassle you yet though... :-)
<ericsnow> voidspace: np :)
<TheMue> voidspace: so, back again. no, no changes, only a one to one port of Dimiters changes. but two files have moved
<TheMue> voidspace: oh, just read dimiters comment, have to check the commit again. hmm
<mup> Bug #1450092 was opened: juju 1.23 fails to bootstrap with upstart <juju-core:New> <https://launchpad.net/bugs/1450092>
<TheMue> voidspace: this is the right one http://reviews.vapour.ws/r/1522/
<TheMue> voidspace: chose master instead of 1.24 accidentally
<TheMue> voidspace: now it's for the correct branch
<voidspace> TheMue: ok, I've been reading it on github anyway
<TheMue> voidspace: thx ;)
<TheMue> voidspace: https://github.com/juju/juju/pull/2161
<sinzui> natefinch, Is this like the gccfo import encoding issue? https://github.com/golang/go/issues/10173
<sinzui> ^ that uses the newer compiler that Ubuntu offered to backport
<natefinch> sinzui: looking
<mgz> sinzui: looks like something else
<mgz> natefinch: the issue with your test is it's totally bogus
<mgz> you have an amd64 machine with both gccgo and golang-go installed on it
<mgz> which we know works
<mgz> it's a ppc machine with only gccgo on it that fails
<katco> mgz: i think we were wondering if you could run that sample program on the ci machine
<mgz> katco: I did, it fails
<katco> mgz: to simplify the conversation
<natefinch> mgz: that's sort of my point.... the *code* is valid.  There's a problem with the environment in which the code is run on that machine.
<mgz> yeah, which is called 'gccgo on ppc64el'
<katco> mgz: so can we take the next step and determine whether this is a CI env. issue, or a gccgo issue?
<katco> mgz: e.g. is there a way to run that sample program on a ppc machine with gccgo installed as normal?
<mgz> I don't have a fresh ppc machine to try, but I can try to find one
<natefinch> mgz: I find it hard to believe that gccgo on powerpc, when properly set up, simply lacks the encoding package
<sinzui> mgz, natefinch I am reading all the control files for gccgo-based packages. I don't see anything different from our package
<voidspace> TheMue: LGTM
<TheMue> voidspace: thx
<alexisb> wwitzel3, ping
<mgz> i'm out
 * katco looks dangerously at her old iBook aging in the corner
<TheMue> katco: new tasks for old (i)books?
<natefinch> encoding/json imports the encoding package.... and we import json in juju, so there's definitely some code that is referencing "encoding" indirectly already... but there is notably no other code importing "encoding" directly.
<katco> TheMue: yeah... it makes me not feel so bad about having so many old computers around :p
<TheMue> katco: hehe
<TheMue> katco: but the iBook was already Intel, wasn't it? the PowerBooks were G4
<katco> TheMue: i think it's a G3, but it was definitely an iBook circa 2004
<natefinch> katco: compiling juju on that is going to be a blast, I'm sure.
<TheMue> katco: oh, long time ago
<katco> natefinch: hey, go compiles fast, right? :)
<katco> natefinch: it's not going to happen anytime soon. i'm not even sure if it boots anymore. weekend project for sometime
<perrito666> sinzui: do you know why my http://reports.vapour.ws/releases/2576 is empty?
<sinzui> perrito666, the site can be 30 minutes behind the data that was stored for it to pick up
<wwitzel3> alexisb: pong
<alexisb> wwitzel3, I got what I needed thank you
<perrito666> sinzui: oh, tx
<perrito666> I am mainly curious why the windows build is broken now, since I have not been even close to what blew up
<sinzui> perrito666, I suspect bad luck and charms in the aws bundle failure. I will try to retest
<perrito666> tx
<sinzui> perrito666, it was a panic. Since we know this test should pass in 1.24, I will increase the retries. It was set to 1 when we knew it wouldn't pass
<sinzui> perrito666, a successful run is 90 minutes for windows
<mup> Bug #1450118 was opened: vsphere provider should use OVA instead of OVF from cloud images. <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1450118>
<sinzui> perrito666, I don't think there is anything for you to look at. I will whip CI to give a success, or concrete failure results
<perrito666> sinzui: tx a lot
<katco> sinzui: will you announce when trunk is open?
<sinzui> katco, I will, but I think CI may do it before me. We installed the automatic unblocking rules this morning
<katco> sinzui: oh wow, it pings irc now?
<sinzui> katco, no, just uses Lp to send out notification about why a bug is fix released
<natefinch> sinzui: are all the bugs assigned to 1.24-alpha1 required to be fixed for 1.24?  Some of them don't seem like they're in a state to be fixed
<katco> sinzui: ah gotcha
<sinzui> natefinch, I cannot say. when we decided to fork 1.24 from master, all the bugs that said they had to be fixed for the milestone got a new task
<sinzui> natefinch, if an issue is not doable for 1.24, then let katco and alexis know to reduce scope
<katco> natefinch: yeah definitely ping me if you see things that are incomplete so we can further groom that list
<natefinch> katco: I doubt this one will ever be understood, certainly it's not fixable on any kind of a short timeline: https://bugs.launchpad.net/juju-core/1.24/+bug/1392810
<mup> Bug #1392810: upgrade-juju --upload-tools, using 1.20.11, from 1.18.4 upgraded to 1.19.4 instead <canonical-bootstack> <canonical-is> <upgrade-juju> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1392810>
<katco> natefinch: kind of agree... not sure why an --upload-tools bug would block a release since it's a purely dev feature
<natefinch> sinzui: when's feature freeze and/or code freeze for 1.24?
<katco> natefinch: last friday
<katco> natefinch: ff at least
<natefinch> katco: ok
<sinzui> natefinch, regardless of the freeze, we expect some large branches to "fix" incomplete features.
<natefinch> sinzui: understood.  I just thought I saw a feature request targeted to 1.24, which obviously won't make it in
<katco> natefinch: actually that's a great point. we have several things with the feature tag targeted to 1.24-*
<katco> sinzui: should we re-target those do you think?
<sinzui> katco, I do. Lets set honest expectation about what we can do
<natefinch> sinzui: I keep seeing bugs like this one that say at the bottom "Changed in juju-core: milestone: 1.24-alpha1 → 1.25.0"   but they're still assigned to 1.24-alpha1 as well... are they supposed to be in both?
<natefinch> sorry this is the bug I mean: https://bugs.launchpad.net/juju-core/1.24/+bug/1412621
<mup> Bug #1412621: replica set EMPTYCONFIG MAAS bootstrap <bootstrap> <maas-provider> <mongodb> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1412621>
<mup> Bug #1450129 was opened: vsphere provider is missing firewaller, networking implementations <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1450129>
<sinzui> natefinch, that is because of Core policy. Every bug must be assigned to a series (and milestone) to track fixes and forward ports
<natefinch> sinzui: ok, so... where/when should that bug be fixed? It's confusing when it's assigned to two different milestones.
<sinzui> natefinch, By forking long before the branch was stable, Juju Core agreed to do double merges for every issue.
<sinzui> natefinch, Core policy is fix it in the oldest stable, then forward port to master. Master is currently 1.25
<natefinch> sinzui: ok, so 1.25 and 1.24 just means 1.24 (and of course make sure it's fixed in trunk as well)
<sinzui> natefinch, yes exactly that
<natefinch> sinzui: still doesn't seem like it needs to actually say "1.25" on the bug anywhere.  The "also put in master" should be implied.
<sinzui> natefinch, yep, but since 1.22 and 1.23 were troubled by missing merges, the policy was set by wallyworld and alexisb.
 * sinzui thinks forking later and asking features to stay in feature branches is easier for everyone
<katco> time for lunch
<natefinch> I don't, because then it's a pig-pile on master once the fork is cut, and whoever gets in last gets a merging nightmare of epic proportions.  By cutting the branch early, work can be ongoing on master and everyone can happily merge incrementally rather than in one hellish lump.  Most things that go in after the branch is cut should be relatively small, in theory.
<natefinch> sinzui: this bug appears to be mainly a DNS problem in their MAAS setup, not really a juju problem, other than we might be able to improve some error messages: https://bugs.launchpad.net/juju-core/1.24/+bug/1412621
<mup> Bug #1412621: replica set EMPTYCONFIG MAAS bootstrap <bootstrap> <maas-provider> <mongodb> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1412621>
<sinzui> natefinch, +1. Fix the message and let someone fix their setup!
<sinzui> natefinch, I really like saying the bug is not in juju
<natefinch> sinzui: this one really sounds like it's a maas/networking issue, not really juju: https://bugs.launchpad.net/juju-core/1.24/+bug/1355782
<mup> Bug #1355782: Error during bootstrap: TLS handshake failed: x509: certificate signed by unknown authority <bootstrap> <cloud-installer> <landscape> <oil> <juju-core:Triaged> <juju-core 1.24:Triaged> <MAAS:New> <https://launchpad.net/bugs/1355782>
<sinzui> natefinch, I and others agree, but I think we need to prove that. Maybe mark it Incomplete and ask for the information needed to confirm the origin.
<mup> Bug #1450146 was opened: vsphere provider feature flag should apply only to bootstrap <juju-core:In Progress by ericsnowcurrently> <juju-core 1.24:In Progress by ericsnowcurrently> <https://launchpad.net/bugs/1450146>
<sinzui> perrito666, We can now see http://reports.vapour.ws/releases/2576
<sinzui> perrito666, There are two failures that suspiciously align with addressable containers on maas and aws. We are retesting. maybe the maas 1.7 is ill
<perrito666> sinzui: tx, why are there keyboard interrupts in the logs?
<sinzui> perrito666, We know that 1.23 passed these tests before and after the feature flag. And we know the aws fix that went into 1.23 has not yet merged into 1.24 and master
<sinzui> perrito666, timeouts. we try to make tests pass quickly instead of letting you wait 5 hours for results
<perrito666> sinzui: k, I'll be checking for new builds, the errors do seem alien to the previous problem
<sinzui> perrito666, in aws the bundle completes between 13 and 17 minutes, with a 30 minutes timeout. I am testing a 45 minute timeout now
<perrito666> iirc, 372 seconds here
<perrito666> the bundle you provided against aws
<sinzui> perrito666, since the errors are in the aws containers, I am inclined to escalate the bug that has a fix and mark your bug as fix released. master and 1.24 are blocked, but maybe a merge is only an hour away
<perrito666> sinzui: I would wait until your EOD arrives
<perrito666> but in principle, the problem seems to be a different bug, which might have been there all the time but was masked by this other issue
<perrito666> don't you love it when everybody you need to ask a question is on the other side of the globe?
<natefinch> perrito666: yep, sucks.
<sinzui> katco, wallyworld: I believe that perrito666's Megawatcher fix is good, but aws and maas are still broken. 1.24 and master are still blocked. I escalated bug 1442801 which appears to be the aws issue and has an unmerged fix available
<mup> Bug #1442801: aws containers are broken in 1.23 <blocker> <ci> <deployer> <ec2-provider> <lxc> <regression> <juju-core:In Progress by dooferlad> <juju-core 1.23:Fix Released by dooferlad> <juju-core 1.24:Triaged by dooferlad> <https://launchpad.net/bugs/1442801>
<katco> sinzui: ty
<sinzui> katco, wallyworld : We are still looking into the maas failure, hoping it is a substrate issue, not juju
<katco> sinzui: fix for 1442801 just hasn't been forward ported to v1.24?
<katco> am i reading that right?
<sinzui> katco, that is my reading too
<natefinch> sinzui: is this even really a bug anymore?  https://bugs.launchpad.net/juju-core/1.24/+bug/1442719   it sounds like it was just a problem of using an old version of juju that didn't support windows, so we had to do some rigamarole to do the upgrade.
<mup> Bug #1442719: juju sync-tools fails <sync-tools> <windows> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1442719>
<katco> oh my goodness... seen in a SetUpSuite(...): cmd := exec.Command("go", "build", "github.com/juju/juju/cmd/jujud")
<perrito666> yup, the comment above is fantastic
<katco> yeah lol
<natefinch> heh.... I should have figured it was the uniter tests
<katco> that is a new one for me
<katco> performing a compile... in a test...
<katco> https://www.youtube.com/watch?v=6nSKkwzwdW4
<mup> Bug #1450191 was opened: quickstart cannot talk juju on maas 1.7 <api> <blocker> <ci> <quickstart> <regression> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1450191>
<perrito666> yay new bug
<natefinch> katco, sinzui: this bug appears to be "juju upgrade-juju fails when the disk is full" ... which does not seem to be something we should really care about fixing: https://bugs.launchpad.net/juju-core/1.24/+bug/1441913
<mup> Bug #1441913: juju upgrade-juju failed to configure mongodb replicasets <canonical-is> <mongodb> <upgrade-juju> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1441913>
<sinzui> natefinch, *reliability* Juju needs to make a sensible decision when it doesn't have resources to justify starting an upgrade
<katco> i can get behind that
<sinzui> Mongo burned its bridges with me when it filled the disk with db and logs for two different deployments.
<natefinch> I gotta run, kids are crazy, wife is sick.    I think that's valid, but that's true of every action we'd try to do
<alexisb> natefinch, quality software is ... ?
<alexisb> natefinch, but go take care of your family first
<natefinch> alexisb: yes... I am just looking for return on investment
<katco> alexisb: we can talk about this on the release call, but nate brought up a great point in that we're past the FF for v1.24, and we should move all new features out past 1.24
 * perrito666 calls his isp to negotiate more upload and runs scared
<katco> alexisb: not sure if this is a feature or not :)
<alexisb> yes there can be a fine line between features and bugs, that is fine
<alexisb> but we need to be careful to not ignore robustness issues
<alexisb> it is important
<katco> absolutely it is important
<alexisb> juju is awesome because it is so powerful
<alexisb> but ...
<perrito666> robustness is a feature in terms of adding new failure vectors
<alexisb> anyhow we can chat about it on the release call
<katco> perrito666: that is a great way of putting that
<wwitzel3> bugs are just incomplete features ;)
<perrito666> wwitzel3: lol
<perrito666> wwitzel3: software is like an infinite surface of minesweeper :p
<wwitzel3> perrito666: and every time you don't hit a mine, it is just the number 8
<perrito666> hehe
 * katco now knows what wwitzel3 really does during the day
<wwitzel3> vimsweeper
<wwitzel3> I wonder if that is a thing?
<katco> you can bet in emacs it is
<katco> i know emacs has tetris...
<katco> our kanban is so useful. it's quite plain to see something is wrong by how all of our bug fixes are blocked.
<katco> 1 liner up for grabs: http://reviews.vapour.ws/r/1523/
<perrito666> wwitzel3: all it takes for something to come to existence in vim is for someone to wonder if it's a thing.
<katco> ugh. i really need to take a weekend and set up a bunch of environments that i can use for triaging bugs
<perrito666> mmpfh, is alt-f2 not present in unity for vivid?
<katco> how do i disconnect from an lxc console? C-a q isn't working
<sinzui> katco, control-d?
<katco> sinzui: nah that just dumps me back to the login prompt
<sinzui> katco, stop the container from another session, then start the container with -n so you can ssh into it
<katco> sinzui: yeah, just wondering how to exit its jail since i always forget to daemonize it
<sinzui> katco, C-a q is correct. I doubt those keys are remapped for you.
<katco> sinzui: not working... wondering if it's because it's for a machine i'm sshed into
<perrito666> katco: ca aq
<perrito666> katco: you seem to be suffering from a double shell with the same escape sequence
<katco> perrito666: tried that as well =/
<perrito666> used to bite me in screen all the time
<perrito666> tried C-a ?
<katco> yeah C-a a q
<katco> C-a q
<perrito666> why in the universe is there no preview in launchpad comments
<perrito666> is there any kind of formatting on the comments?
<katco> perrito666: there is, but i haven't taken the time to figure out what it is
<perrito666> it has quite a user-foe approach
<perrito666> sinzui: ?
<sinzui> perrito666, no formatting supported. It was designed 10 years ago
<perrito666> sinzui: damn, if only software was updateable...
<sinzui> perrito666, I use lots of new lines and many comments honour leading spaces
<sinzui> perrito666, If it was, I would jump ship to tear down the bug, blueprint, and question trackers
<katco> wow i'm impressed... my old ibook booted right up. G4 1.07GHz with 768MB of RAM
<katco> apparently i took really good care of this
<perrito666> katco: apple did
<perrito666> :p
<katco> lol
<katco> i do love apple hardware
<perrito666> ibook g4s were white thinkpads
<perrito666> the ibook g4 was my all time favorite machine, I almost cried when I replaced it
<katco> yeah it was my college machine after some crappy compaq gave up the ghost
<katco> i scrimped and saved for it... good investment.
<katco> haha... how i know it's old: in my browser history: google reader
<sinzui> katco, My daughter might want to buy your old computer. Her favourite marble game doesn't play on the intel Macs
<katco> sinzui: i am toying with the idea of turning this into a build slave
<sinzui> katco, Power7 and Power8 aren't going to help
<katco> oh =/
<sinzui> katco, Ubuntu desupported PPC years ago. P8 is different, more than just the switch in endianness
<perrito666> debian might, but iirc, by the time I sold the thing the only os running smoothly was osx.. Tiger?
<katco> i think this is running mtn. lion
<perrito666> katco: does it run ok with less than 1g?
<katco> seems to
<katco> i mean i'm mostly on the terminal, but safari is doing ok
<katco> it's a really old version of safari though... i think i'd have to install FF or something if i wanted to browse seriously
<sinzui> katco, The community made trusty for the G4 http://cdimage.ubuntu.com/releases/14.04/release/
<katco> if anything the original HD is what slows me down the most
<katco> sinzui: well, no point if i can't use it to test juju
<sinzui> katco: true
<sinzui> katco, Our current ppc64el machines will be removed in a few months. One option for QA and Core is to use Canonistack, which might get two power8 machines
<menn0> /wii alexisb
<alexisb> heya menn0
<katco> sinzui: http://juju-ci.vapour.ws:8080/job/github-merge-juju/2966/ has been running for 5h41m?
<sinzui> katco, wow, let me look
<katco> sinzui: it looks like it might be the new functionality
<katco> check-blockers.py or w/e
<sinzui> katco, yes it is
<sinzui> katco, oops, it is waiting for human input. I will have this fixed in a few minutes, but I will need to resubmit this for the victim
<katco> sinzui: np
<perrito666> waigani: ping
<waigani> perrito666: pong (or did you mean wallyworld?)
<perrito666> nope, you
<perrito666> waigani: https://github.com/juju/juju/commit/45f84f50 is it possible that the step you add in steps121.go was meant for steps122.go?
<perrito666> I know it's an old patch, but it's worth trying
<perrito666> waigani: we will know in like 20 mins anyway :p the test is running with the change applied
<waigani> perrito666: cool, reading / remembering. There were a few cases we found where the same upgrade step had to be run for 1.21 and 1.22 - due to the timing of tweaks and releases
<waigani> perrito666: so if it does pass, I wouldn't remove it from 1.21, but also add it to 1.22. Does that make sense?
<perrito666> waigani: the thing is, that patch, and I might be wrong, is not merged into 1.21
<waigani> perrito666: ah
<perrito666> I am right
<perrito666> https://github.com/juju/juju/blob/1.21/upgrades/steps121.go
<perrito666> My guess, the merging got pushed
<perrito666> and it entered after 1.21 cut
<waigani> right, it would make sense for a 1.21 upgrade step to be on 1.21
<waigani> perrito666: shall I merge that back into 1.21?
<perrito666> waigani: mm, I am not sure
<perrito666> ask sinzui
<perrito666> btw, kudos on the commit; it is very concise and made discovering the issue quite easy once I figured out where it was
<waigani> thanks :) at least I make my issues obvious ;)
<perrito666> waigani: anyway, it is, I think, causing https://bugs.launchpad.net/juju-core/+bug/1447853 so I might fix it as part of that once I learn in which versions we want that applied
<waigani> perrito666: so you're running tests on 1.23 with the upgrade test targeting 1.22 instead of 1.21?
<mup> Bug #1447853: Local charms are not added to storage on upgrade to 1.22.x <charms> <regression> <storage> <upgrade-juju> <juju-core:Triaged> <juju-core 1.24:In Progress by hduran-8> <https://launchpad.net/bugs/1447853>
<perrito666> indeed, it was
<perrito666> waigani: the thing is, all upgrade paths are broken because of that
<waigani> perrito666: okay, well if the tests pass, definitely add it as a 1.22 upgrade step
<perrito666> sinzui: please let me know where this patch can be applied: 1.22, 1.23 and 1.24?
<perrito666> waigani: cheers, thanks a lot
<waigani> perrito666: actually, I don't think we need to backport to 1.21. All the charm id changes are in the same PR as the upgrade step - intended to hit 1.21 but ended up in 1.22
<alexisb> anastasiamac, ping
<waigani> wallyworld:  I've reverted http://reviews.vapour.ws/r/1512/  to my original PR. could you take a look at my last comment which explains why and let me know if that makes sense?
<wallyworld> waigani: i'm not sure it's correct
<wallyworld> if cBucket, ok := attrs["control-bucket"]; !ok  <--- this will not result in ok = true if control bucket defaults to ""
<wallyworld> hence the control bucket will not be generated
<waigani> wallyworld: yep, that's what I was thinking and it's why I put that logic in. But the TestPrepareInsertsUniqueControlBucket passes without the logic
<wallyworld> so the test is wrong
<wallyworld> :-)
<wallyworld> maybe
<wallyworld> haven't looked
<wallyworld> i bet if you test live it would fail
<waigani> wallyworld: openstack also works like this. It looks like the configDefaults are used for validating unknown attrs - let me track down where the config used in prepare is being built
<waigani> wallyworld: I'll test live too
<sinzui> perrito666, 1.24 and 1.25 per the bug. 1.22 is dead to me. 1.23 is doable, though we are not committing to a 1.23.3
<sinzui> waigani, we cannot release 1.22 fixes through our PPAs because the version is superseded by 1.23.2. We can merge a fix into 1.23 for 1.23.3, but we are not committing to a 1.23.3 at this time.
<waigani> sinzui: okay. How do we resolve this bug then: https://bugs.launchpad.net/juju-core/+bug/1447853, we need to add an upgrade step to 1.22 to fix it?
<mup> Bug #1447853: Local charms are not added to storage on upgrade to 1.22.x <charms> <regression> <storage> <upgrade-juju> <juju-core:Triaged> <juju-core 1.24:In Progress by hduran-8> <https://launchpad.net/bugs/1447853>
#juju-dev 2015-04-30
<sinzui> waigani, oh, I understand. Yes i think so. I can confirm though that the "upgrade-charm --force" hack is a permanent fix, so envs/users can resolve the problem in minutes if they need to
<sinzui> waigani, We cannot add an upgrade step to 1.22, since we cannot distribute that version any more.
<sinzui> waigani, We need 1.23 and 1.24 to do the fix.
<waigani> sinzui: right, got it. perrito666 ^
<sinzui> waigani, one complication is that our early adopters still believe odd numbered jujus are dangerous, so they only upgrade to even numbers
<waigani> wallyworld: I bootstrapped on aws without issue: http://pastebin.ubuntu.com/10946635/. As I understand it: PrepareForCreateEnvironment is checking UnknownAttrs for control-bucket. configFields, which already had control-bucket, is used to determine what attrs are unknown, thus control-bucket will not be in the unknown attrs, it will be nil. The configDefault is only used to check Coercion of the value types of the attributes:
<waigani> environs/config/config.go:1384
<waigani> wallyworld: this is the same for openstack
<waigani> wallyworld: though if that is correct, and control-bucket will never be an unknownAttr, it begs the question - why check if it's set at all?
<wallyworld> waigani: you are misunderstanding what UnknownAttrs are
<wallyworld> they are attrs specific to different environment types
<wallyworld> control bucket will always be there
<wallyworld> there are other areas in the codebase that will fail if control bucket defaults to ""
<wallyworld> are you sure you didn't have a value set in your env yaml for that bootstrap?
<waigani> wallyworld: ah, sorry that makes sense. Let me check...
<waigani> wallyworld: nothing set in environments.yaml let me read the code again and have another crack at understanding what's going on
<wallyworld> waigani: also, if bucket name is "", bootstrap will not fail but errors will be logged (i haven't seen the code before but just checked then)
<wallyworld> so the tl;dr; is we must not allow bucket name to be ""
<waigani> wallyworld: no errors in all-machines.log on machine 0
<wallyworld> waigani: does the jenv file have a control bucket attr?
<waigani> wallyworld: not in .jenv I just bootstrapped again and dumped out the attrs - only access-key and secret-key are set, control-bucket is not passed through. It is later set by the prepare func. It looks as though empty attrs are being removed.
<mup> Bug #1450265 was opened: juju depends on google.golang.org/cloud/compute/metadata but is not included in dependencies.tsv <juju-core:New> <https://launchpad.net/bugs/1450265>
<waigani> wallyworld: actually no. I still think configFields and configDefaults are used to validate the unknownAttributes - to make sure they are expected attrs for the provider and not typos and the values are of the right type. They are not being passed through as default values to prepare. The config values are being read from environments.yaml
<wallyworld> waigani: when control bucket is generated, it should end up in jenv, if it's not there, that means there's an issue
<waigani> wallyworld: after bootstrap it's there, before it's not.
<wallyworld> correct, it will not be there until after
<waigani> wallyworld: yep, that's whats happening
<waigani> I bootstrapped again to double check that
<wallyworld> hmm, i can't see how it's being generated then if it's in the map (but "")
<wallyworld> not without the extra != "" check
<mup> Bug #1450265 changed: juju depends on google.golang.org/cloud/compute/metadata but is not included in dependencies.tsv <juju-core:Invalid> <https://launchpad.net/bugs/1450265>
<axw> wallyworld: can you please review https://github.com/juju/charm/pull/125 ?
<wallyworld> syre
<axw> ta
<wallyworld> axw: you may have seen, 1.24 and master now have no storage ff and also have upgrade step for block devices
<axw> wallyworld: I have, thank you
<axw> had minor merge conflicts
<wallyworld> just minor, that's good
<natefinch> davecheney: constabulary is a terrible name for a github org
<natefinch> anyone used the canonical VPN?  I can't understand their directions... seems like they're missing a step
<axw> natefinch: yes, what bit are you stuck on?
<natefinch> axw: thanks.   So, I installed  network-manager-openvpn-gnome, restarted network manager, then the instructions say:
<natefinch> Select network manager -> VPN Connections -> Configure VPN...
<natefinch> ....select network manager from where?
<natefinch> ....I think I just figured it out
<natefinch> they mean the network icon in the top right there
<axw> yep
<natefinch> .. I had no idea that was called the network manager :/
<natefinch> axw: ok, another dumb question - how do I get the select file dialog to show dot files?  it says to pick the file from the ~/.sesame directory, but the file picker isn't showing hidden directories.
<menn0> ericsnow: can you confirm that the fix for bug 1447446 made it into the 1.24 branch and master? (and update the ticket accordingly)
<mup> Bug #1447446: 1.23.1: bootstrap failure, vivid, local provider <bootstrap> <landscape> <juju-core:Fix Committed by ericsnowcurrently> <juju-core 1.23:Fix Released by ericsnowcurrently> <juju-core 1.24:New> <https://launchpad.net/bugs/1447446>
<axw> natefinch: you can use Ctrl+L to enter the location
<axw> not sure if there's a better way, that's what I do
<menn0> ericsnow: it was one of the tickets that was incorrectly targeted
<axw> ah, right click has "show hidden files"
<natefinch> OMG... why doesn't that show all the time :/
<natefinch> ahh yeah, I didn't think to right click there... much better
<ericsnow> menn0: done
<menn0> ericsnow: cheers
<mup> Bug #1447446 changed: 1.23.1: bootstrap failure, vivid, local provider <bootstrap> <landscape> <juju-core:Fix Committed by ericsnowcurrently> <juju-core 1.23:Fix Released by ericsnowcurrently> <juju-core 1.24:New> <https://launchpad.net/bugs/1447446>
<natefinch> axw: Thanks for the help, got it working.  .
<natefinch> sinzui: you around?
<axw> natefinch: cool
<natefinch> axw, menn0: do you guys know how to make the JFDI thing on github work?  I swear I got the syntax right, but it's not going through (still says "doesn't match fixes-blah")
<axw> natefinch: $$__JFDI__$$
<natefinch> :/ did that
<menn0> natefinch: that's what I would have said
<natefinch> wonder if the functionality got broken at some point (or maybe it just doesn't like me)
<natefinch> hahaha ... or maybe I misspelled jfdi... :/
<axw> natefinch: hum, not sure what's up with that. I've been doing that today, so it's not broken in general
<natefinch> JDFI is not gonna work
<axw> heh
<axw> that'll do it :)
<menn0> natefinch: maybe check one of axw's uses of it today?
<jw4> menn0: I'm pretty sure that change went in before 1.24 was cut... I'll verify
<menn0> jw4: thanks! i'm not saying it didn't make it, I just wanted to be sure. The ticket wasn't targeted correctly so it could have been missed. Just update the ticket once you've checked.
<jw4> menn0: will do
<mup> Bug #1450299 was opened: api/client: test fails on ppc64le <ppc64el> <juju-core:In Progress by dave-cheney> <https://launchpad.net/bugs/1450299>
<waigani> wallyworld: the bucket val is being set because it's not in the map. the validation (which would set the default) happens later.
<waigani> wallyworld: I've sent an email with the details
<wallyworld> ok, ty, will look
<wallyworld> waigani: thanks for explanation. i just wanted to be 100% sure that setting the default to "" wouldn't accidentally fail to generate a control bucket if one were not specified, and that config stuff is a bit of a mess which we've managed to break before :-(
<waigani> wallyworld: no, I appreciate it. I now understand what's going on much more, so thanks :)
<wallyworld> waigani: i did too 2 years back but have since only vague recollections (nightmares?) about config
<waigani> haha
<wallyworld> the schema default vs omit stuff is really hard to get right
<wallyworld> and it really is a convoluted web
<wallyworld> and we have released juju versions which broke setups in the wild
<waigani> yeah, it would be good to see if there is a better design pattern that could be used to clear things up a bit
<waigani> until then, comments are really important!
<waigani> wallyworld: so is that a shipit? (not that we can land anything)
<wallyworld> waigani: yeah, but if it's a bug fix for 1.24, feel free to jfdi at this point
<waigani> wallyworld: thanks
<wallyworld> axw: this fixes the critical blocker http://reviews.vapour.ws/r/1532/
<wallyworld> hopefully
<TheMue> morning o/
<TheMue> hmm, got a fix for 1.24 but CI doesn't accept it *sigh*
<axw> wallyworld: reviewed
<wallyworld> ty
<mgz> TheMue: if the bug you're fixing is actually urgent, you have discretion - but it seems that issue doesn't actually break our bundle test on maas (it probably should)
<TheMue> mgz: it's a port of a fix for a bug that is critical in 1.23 to 1.24, where it is only high. don't know why.
<TheMue> mgz: the bug is #1445063
<mup> Bug #1445063: addressable containers cannot resolve non-FQDN in maas <addressability> <cloud-installer> <kvm> <landscape> <lxc> <maas-provider> <network> <oil>
<mup> <openstack> <uosci> <juju-core:Triaged> <juju-core 1.23:Fix Released by dimitern> <juju-core 1.24:In Progress by themue> <https://launchpad.net/bugs/1445063>
<mgz> TheMue: that actually makes some sense - we couldn't release a 1.23 with it, but we have time before we're putting out 1.24
<mgz> and it's not actually stopping any of *us* doing work
<TheMue> ic
<mgz> whereas the deployer stuff being borked prevents us improving our tests to catch issues like lxc networking problems :)
<TheMue> mgz: just saw that dimiter commented it with JFDI
<mgz> yeah, it's fine
<mgz> we don't gain anything from doing it but it's not really harmful
<TheMue> that's great, thx
<wallyworld> axw: updated, i also added an index, i think it may help with the sorts
<axw> wallyworld: you didn't explain the change from updated to created though. is it for performance?
<wallyworld> axw: oh, i see what you're asking, sorry missed the point
<wallyworld> updated is only to the nearest second
<wallyworld> we need much finer grained to get the order right
<wallyworld> most timestamps we store in mongo for juju are to the nearest second
<axw> wallyworld: so why not store a number somewhere and just increment it each time you need to create an entry? using time is fragile, whatever the resolution
<wallyworld> could do, but creating a sequence seems like overhead for what is being done here; it's just charm status entries, hardly expected to change even a few times per second
<wallyworld> and the sort is done after filtering for the unit
<axw> wallyworld: if you're confident with that, okay. I would just like to point out that it's trivial to do, and you could drop the additional index. you could just use State.sequence(), and use the result as the _id field
<axw> anyway
 * axw takes a final look
<wallyworld> ok, i'll do that
<mgz> wallyworld: if you get a mo, can I have a stamp on <http://reviews.vapour.ws/r/1473/>
<mgz> (thanks for the status info)
<wallyworld> sure give me a sec
<mgz> it is sadly much larger now, but the commit history should make sense to you
<wallyworld> axw: sequence number used
<axw> wallyworld: thanks
<axw> shipit
<wallyworld> np, thanks for pointing it out
<wallyworld> mgz: i need dinner, i'll look at branch after
<mgz> wallyworld: no probs, won't be landing till this evening I guess
<perrito666> Wallyworld: nice catch on the legacy status
<wallyworld> perrito666: yeah. also found issues with history, incl tests not wired up :-)
<perrito666> The test for legacy was long overdue (which tests were not wired? I just briefly went through the diff and didn't notice them)
<wallyworld> perrito666: status_test.go - the status history tests
<wallyworld> so they were never run and hence missed picking up bugs
<perrito666> Duh, just saw it dang
<perrito666> Btw
<perrito666> 		+	c.Assert(len(history), gc.Equals, 100)  	c.Assert(history, gc.HasLen, 100)
<perrito666> You are checking twice the same
<wallyworld> oh i added that to make the failures easier to debug
<wallyworld> i'll remove thanks
<perrito666> Sorry for that not being a comment I am in the phone
<wallyworld> np :-)
<perrito666> And getoldesttimetokeep no longer gets time so it might need a name change and comment correction :)
<wallyworld> perrito666: yeah true, same sentiment though :-)
<natefinch> mgz: you around?
<mgz> natefinch: I am here
<natefinch> mgz: I'm trying to connect to stilson-7, but even connected to the VPN, I can't even ping the server, and ssh never connects... any ideas?
<natefinch> mgz: also, does it matter if I use the US or UK VPN?
<mgz> natefinch: shouldn't
<mgz> natefinch: I can get in via our ssh bouncer
<mgz> if the vpn isn't playing ball for you
<mgz> natefinch: also ssh -vv is useful for general debugging
<natefinch> mgz: that might be a good option.  I don't know what's up with the VPN... but if all I'm supposed to need to do is connect to it, and then ssh... yeah, it's not working.
<natefinch> mgz: yeah, it just gets stuck at connecting to the IP..... verifying, stilson-07 is 10.245.67.135 ?
<mgz> natefinch: yup - you should see that ip via either the vpn or the bouncer
<mgz> lunching now
<TheMue> lunch, afk
<mup> Bug #1450437 was opened: Juju bootstrap fails with sub error code 1 <juju-core:New> <https://launchpad.net/bugs/1450437>
<perrito666> sinzui: ping me when you are around please
<sinzui> perrito666, I have just arrived, though a little low on caffeine
<perrito666> I sadly have been awake for too long already
<perrito666> so, I have a fix for https://bugs.launchpad.net/juju-core/+bug/1447853 that should go into 1.22, 1.23, 1.24 and master :p can I do that? I mean push a fix to 1.22
<mup> Bug #1447853: Local charms are not added to storage on upgrade to 1.22.x <charms> <regression> <storage> <upgrade-juju> <juju-core:Triaged> <juju-core 1.24:In Progress by hduran-8> <https://launchpad.net/bugs/1447853>
<sinzui> mgz, r=me for joyent, but you conflict with my changes in check_blockers
<sinzui> katco, can you ask someone to read the new comment on bug 1437266. Do we need this fixed in 1.24 or 1.25?
<mup> Bug #1437266: Bootstrap node occasionally panicing with "not a valid unit name" <deploy> <destroy-machine> <destroy-service> <juju-core:Triaged> <https://launchpad.net/bugs/1437266>
<sinzui> natefinch, Are bug 1340749 and bug 1450437 the same as bug 1412621. Can I make them dupes?
<mup> Bug #1340749: Replicaset initiation failure reports wrong error <mongodb> <juju-core:Triaged> <https://launchpad.net/bugs/1340749>
<mup> Bug #1450437: Juju bootstrap fails with sub error code 1 <juju-core:New> <https://launchpad.net/bugs/1450437>
<mup> Bug #1412621: replica set EMPTYCONFIG MAAS bootstrap <bootstrap> <maas-provider> <mongodb> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1412621>
<voidspace> dooferlad: TheMue: alexisb: I'm off to the dentist
<voidspace> bbiab
<dooferlad> voidspace: enjoy!
<TheMue> voidspace: just a check or more? have to do that the next days too ;)
<TheMue> voidspace: yes, enjoy
<voidspace> a check, but for all the family...
<perrito666> sinzui: dont forget my question now that you are caffeinated :p
<sinzui> perrito666, 1.22 is superseded in our PPAs by 1.23. and the packaging rules know it is an error to try to release a lower version
<perrito666> soo, 1.23, 1.24, master?
<sinzui> perrito666, alexisb, mramm, and xwwt would have to agree to purge our ppa and rush a 1.22 fix out... which means 1.23 would be released next week, not today
<sinzui> perrito666, yes. I will add the 1.23 task to the bug
<mup> Bug #1450437 changed: Juju bootstrap fails with sub error code 1 <juju-core:New> <https://launchpad.net/bugs/1450437>
<katco> sinzui: i don't know whether or not bug 1437266 need be fixed in 1.24 or 1.25. it looks like it only affects a few people, so probably 1.25? we should discuss in the release call
<mup> Bug #1437266: Bootstrap node occasionally panicing with "not a valid unit name" <deploy> <destroy-machine> <destroy-service> <juju-core:Triaged> <https://launchpad.net/bugs/1437266>
<katco> ericsnow: stand up
<natefinch> ericsnow: stup?
<natefinch> sinzui: I'll take a look at those bugs in a few minutes
<perrito666> question about upgrades, when upgrading to a minor version, are the steps for that whole major version rerun?
<natefinch> perrito666: not sure.  I would hope not.
<perrito666> natefinch: they are supposed to be idempotent
<natefinch> perrito666: then why do you care? :)
<perrito666> natefinch: I need them to
<fwereade> perrito666, we *should* run any steps that haven't already been run, but I'm pretty sure we shouldn't rerun them (ok, if we do by accident they should be idempotent... but still)
<fwereade> perrito666, why do you need them?
<perrito666> can anyone stamp this oneliner? http://reviews.vapour.ws/r/1536/
<perrito666> natefinch:
<perrito666> katco: ericsnow ?
 * ericsnow takes a look
<natefinch> perrito666: looking... are there no tests for this stuff?
<ericsnow> perrito666: nice 6 line oneliner :)
<natefinch> right?
<perrito666> natefinch: there are but the migration step was already there in the wrong version
<perrito666> ericsnow: I meant another work but since I don't know it in English my brain replaced it with oneliner
<perrito666> sorry
<perrito666> word*
<ericsnow> perrito666: no worries :)
<perrito666> natefinch: or you mean tests for the contents of the step?
<natefinch> perrito666: I mean... this code was wrong before, and obviously no test was failing  because of it.  It would be nice if there were a test that would fail if your addition was not there.
<ericsnow> natefinch: I expect the testing is already there in state/upgrades_test.go
<ericsnow> natefinch, perrito666: ah, there should be a test that checks that the step was run
<perrito666> ericsnow: yes, I should have committed it :p
<perrito666> sorry I had all these changes in patches that a testing script applied after checkout
<ericsnow> perrito666: looks like the test is missing in steps122_test.go
<ericsnow> (or the step, rather)
<ericsnow> perrito666: so TestStateStepsFor122 should be failing
 * perrito666 looks at why his unstaged changes checker did not fail
<perrito666> fixed
<perrito666> ericsnow: can I consider you a holy reviewer or do I need a signature on that stamp?
<ericsnow> perrito666: I wasn't going to bring it up <wink>
<katco> perrito666: i trust ericsnow's judgement
<katco> perrito666: go with it
<natefinch> perrito666: me too
<perrito666> so do I; it's the bureaucratic weight of the stamp I was looking for, but this is heavy enough for me to let it pass
<natefinch> perrito666: I rubber stamped it
<perrito666> natefinch: I already hit merge anyway :p
<perrito666> I am hurried enough
<natefinch> wow, those are some... ummm.... high level tests
<perrito666> natefinch: ??
<perrito666> there is a separate test for the function being run :) if that is what worries you
<natefinch> sorry, high level isn't the right word..... what's the opposite of comprehensive?
<natefinch> perrito666: ahh, ok
<natefinch> perrito666: just with the changes, it looks like all we're testing is that the upgrade step has the right description
<perrito666> natefinch: but again, that was already there, only in the wrong version; the merge of the patch got delayed until 1.22, so the steps were added in a place where they would never run
<perrito666> natefinch: well there is no way to test that the upgrade function does what it says
<perrito666> we check that the steps are the right ones based on their description, we know that at least that is what we want to run
<natefinch> perrito666: I'm sure there's a way to test that the upgrade function at least calls AddEnvUUIDToCharms....
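The two checks being discussed can be sketched like this. This is an illustrative Python sketch only, not juju's actual Go tests in steps122_test.go; the step description, `add_env_uuid_to_charms`, and the state layout are all hypothetical stand-ins. It shows both checking the registered step descriptions (what TestStateStepsFor122 does) and verifying that running the steps actually invokes the migration function, idempotently:

```python
# Illustrative sketch: models upgrade-step registration and the two checks
# discussed above. All names are hypothetical stand-ins for juju's Go code
# (e.g. AddEnvUUIDToCharms, upgrades.stateStepsFor122).
calls = []

def add_env_uuid_to_charms(state):
    # Idempotent by design: re-running must not duplicate work.
    for doc in state["charms"]:
        doc.setdefault("env-uuid", state["env-uuid"])
    calls.append("AddEnvUUIDToCharms")

def state_steps_for_122():
    return [("add env-uuid to charms collection", add_env_uuid_to_charms)]

def run_steps(state):
    for _desc, fn in state_steps_for_122():
        fn(state)

state = {"env-uuid": "deadbeef", "charms": [{"name": "wordpress"}]}

# Check 1: the registered step descriptions are the ones we expect.
assert [d for d, _ in state_steps_for_122()] == [
    "add env-uuid to charms collection"]

# Check 2: running the steps actually calls the migration function,
# and running twice is harmless (idempotent).
run_steps(state)
run_steps(state)
assert calls.count("AddEnvUUIDToCharms") == 2
assert state["charms"] == [{"name": "wordpress", "env-uuid": "deadbeef"}]
```

The point of check 2 is exactly natefinch's complaint: a description-only test passes even if the step is registered in the wrong version and never runs.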
<natefinch> mgz, sinzui: I'm trying to connect to stilson, but can't seem to make the connection no matter how I do it.  pure VPN doesn't work for me, I just never can get a response from the IP address.  Martin told me about the CI bouncer, and I can connect to it, but I think I must be messing up the proxy command, because it never hops to stilson from the bouncer.
<sinzui> natefinch, I will get you a ssh config stanza
<natefinch> sinzui: thank you.  I'll be afk for an hour-ish, so no rush
<katco> ericsnow: nttac meeting
<katco> nttca
<katco> ericsnow: hey can you join the nttca meeting again?
<mup> Bug #1450573 was opened: HA and backup recovery tests failed <backup-restore> <blocker> <ci> <ec2-provider> <ensure-availability> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1450573>
<TheMue> anyone willing for a review of a forward port of a fix from 1.23 and 1.24 to now master? see http://reviews.vapour.ws/r/1538/. thx
<natefinch> TheMue: you should reference the original PR so that it's easy to check that the port is the same as the original
<katco> natefinch: running a little behind, may be slightly late
<natefinch> katco: no worries.  I know how it is.
<TheMue> natefinch: will add it
<TheMue> natefinch: done
<natefinch> TheMue: ship it!
<TheMue> natefinch: thx
<natefinch> TheMue: seems like we should be able to have an automated check that a forward port is identical to an existing PR on another branch and skip the second review.   obviously, if changes needed to be made to the PR for the forward port, then it needs human eyes, but for ones like this where it's identical... would be nice to skip that.
<TheMue> natefinch: sounds like a good idea. but now also the fix doesn't match the listed fixes, so I've got to use JFDI
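At its crudest, the automated forward-port check natefinch suggests could normalize the two patches (dropping headers that legitimately differ between branches) and compare what remains. A hypothetical Python sketch; real PRs would first have to be fetched via the GitHub API, and the patch text below is fabricated:

```python
# Illustrative sketch of the "identical forward port" check proposed above:
# strip commit-specific noise from two patches, then compare. A crude
# heuristic only -- dropping hunk headers also discards position info.
import re

def normalize(patch):
    keep = []
    for line in patch.splitlines():
        # Drop headers that legitimately differ between branches.
        if re.match(r"^(diff --git |index |--- |\+\+\+ |@@ )", line):
            continue
        keep.append(line)
    return "\n".join(keep)

original = """diff --git a/foo.go b/foo.go
index 1111111..2222222 100644
--- a/foo.go
+++ b/foo.go
@@ -1,3 +1,3 @@
-bad line
+good line
"""
# A byte-identical port differs only in the commit-specific headers.
forward_port = original.replace("1111111..2222222", "3333333..4444444")

assert normalize(original) == normalize(forward_port)
```

If the normalized patches match, the bot could skip the second human review; any difference would fall through to the normal review path.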
<TheMue> hmm, my CI output shows "Extant directories unknown: gopkg.in/juju/charm.v6-unstable". any idea what this means?
<TheMue> natefinch: any idea here ^^
<natefinch> TheMue: no idea.  sinzui? ^
<sinzui> natefinch, are you using the make-release-tarball script?
<sinzui> natefinch, That error looks like the checks that ensure every package in the tarball is documented.
<TheMue> sinzui: it's me, and I'm only trying to merge a PR into master.
<TheMue> sinzui: only 5 files changed, dependencies are unchanged
<sinzui> TheMue, I think one of things you are trying to merge is injecting charm.v6-unstable. either we document that it is needed, or fix the code to not get it
<TheMue> sinzui: strange, I only ported a fix, a few code lines
<perrito666> TheMue: if you brought in charm v6 it's definitely a bigger change; we were using v5 yesterday
<TheMue> perrito666: no, I haven't. no change of imports, and I just scanned the code here, found it nowhere
<sinzui> abentley, jog, something in CI is very wrong. it is trying to upgrade from 1.23.2 to 1.22.2. I think the upgrade-jenkins script was run a few hours too early
 * sinzui looks for downgrade options
<abentley> sinzui: on the bright side, upgrade-jenkins-branches ran to completion without errors :-/
<sinzui> yep, CI is on the new 1.23.2, but we weren't ready for it
<sinzui> abentley, I will take this opportunity to downgrade to 1.21.1 to watch the upgrade
<sinzui> I just got a tax bill for $326,000 for the sale of my home 2 years ago. I think I am going to pieces right now
<abentley> sinzui: Wow.  Ugh.
<abentley> sinzui: Yeah, you might want to take some time out.
<TheMue> ok, I'll retry then tomorrow
<sinzui> abentley, I think I need an accountant to explain to the IRS that I did pay tax on the sale of my house, not the sale of a business under a different tax law
<sinzui> perrito666, We had a misfire with the release of the new juju. all the CI machines got upgraded during the test. So it tried an impossible upgrade of 1.23.2 to 1.22.3 :(
<sinzui> perrito666, But CI has a cache of all the old jujus. I installed 1.21.3, then reset all the upgrade tests. All is well! The logs show that after upgrade we could set new charm configs and juju did the right thing \o/
<sinzui> I will update CI to the real stable when we are finished testing 1.22.3
<sinzui> wow, and we just got a bless
<katco> ericsnow: running just a little late brt
<ericsnow> katco: k
<perrito666> Sinzui cool can i fwport  now?
<sinzui> perrito666, yep.
<perrito666> Sweet
<sinzui> perrito666, should I make CI wait for your 1.23 merge? if it will be several hours, I would test 1.24
<perrito666> I am at the dentist currently so it should be 1.5h from now :p
<perrito666> There is a limit to the things you can do with a phone
<sinzui> natefinch, thank you!
<sinzui> natefinch, maybe I should also grab the old broken package and run in there too
<natefinch> sinzui: certainly seems like a good idea.  I wish I knew more of how the packaging process worked, I might be of more help.  But I'm glad I was able to at least do this much.
<sinzui> natefinch, the container doesn't have our build deps ppa attached, so it is pure trusty, but I don't see anything in the ppa that could uncouple: https://launchpad.net/~juju/+archive/ubuntu/golang/+packages
<sinzui> regardless, natefinch you have given me something I can work with
<natefinch> sinzui: I am really glad it's useful.
<sinzui> TheMue, CI is broken as you described in your own branch. I think an upstream change to a repo has broken all builds
<sinzui> TheMue, mgz discovered some months ago that Go will always get all the deps in the git master branch even when they will not be used. It pollutes the tree. mgz's fix was to fix the upstream repo.
<menn0> davecheney: morning
<sinzui> katco, wallyworld: we need to form an angry mob and address bug 1450631.
<mup> Bug #1450631: Something is injecting gopkg.in/juju/charm.v6-unstable into the tree <blocker> <ci> <packaging> <juju-core:Triaged> <juju-release-tools:Triaged> <https://launchpad.net/bugs/1450631>
<sinzui> I will send an email to the list since it breaks all merges and CI testing
<katco> sinzui: sorry i was in a meeting... TAL now
<mup> Bug #1450631 was opened: Something is injecting gopkg.in/juju/charm.v6-unstable into the tree <blocker> <ci> <packaging> <juju-core:Triaged> <juju-release-tools:Triaged> <https://launchpad.net/bugs/1450631>
<jw4> sinzui: it looks like it may be the github.com/juju/jujusvg repo that has a reference to gopkg.in/juju/charm.v6-unstable
<katco> jw4: ty
<jw4> sinzui, katco: I wonder if referring to jujusvg through gopkg.in would resolve this?
<sinzui> jw4, ugh, we didn't change our dep on github.com/juju/jujusvg to get it though.
<sinzui> jw4, maybe
<sinzui> ps jw4, did you get credentials to see test results?
<jw4> sinzui: not yet
<jw4> sinzui: alexisb is still working on that :)
<sinzui> okay, I will see if I can get that to you tomorrow jw4
<jw4> sinzui: very likely the trigger: https://github.com/juju/jujusvg/pull/28
<jw4> sinzui: we could ask jujusvg folks to tag our revision 28683402583926ce903491c14a07cdc5cb371adb as v1 maybe
<jw4> sinzui: then we might be able to use gopkg.in
<jw4> s/our revision/their revision/
<sinzui> jw4, katco: while changing our dep on jujusvg in every branch might work around this issue, we cannot go back and change branches we aren't developing. Since we did an unplanned build of 1.22.3 today, I worry that we will not be able to respond to emergencies in older jujus
<jw4> sinzui: jujusvg is not in dependencies.tsv for 1.22
<sinzui> oh, I see what you are saying jw4. I think you are right. and this means that older branches are probably not affected
<sinzui> jw4, jobs that make the tree will always reject when a package that is not documented appears in the tree.
<jw4> sinzui: yeah... I'm hoping that using gopkg.in it will pull in only the tagged versions and not the master revisions
<jw4> sinzui: I haven't verified that yet though
<katco> jw4: i think you are correct
<jw4> sinzui: but we can't use gopkg.in on jujusvg until it's tagged
<jw4> (otherwise we just get the default v0 which is master)
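gopkg.in's rule (roughly, per its documented convention) is: for pkg.vN, serve the highest matching vN(.x(.y)) tag or branch; an untagged repo is only reachable as v0, which maps to master. A simplified Python model of that selection rule, ignoring real subtleties such as "-unstable" suffixes:

```python
# Simplified, illustrative model of gopkg.in version selection: for a
# requested major version N, pick the best-matching vN(.x(.y)) tag or
# branch; a v0 request with no version refs falls back to master.
import re

def select_ref(refs, major):
    pat = re.compile(r"^v(\d+)(?:\.(\d+))?(?:\.(\d+))?$")
    candidates = []
    for ref in refs:
        m = pat.match(ref)
        if m and int(m.group(1)) == major:
            key = tuple(int(g or 0) for g in m.groups())
            candidates.append((key, ref))
    if candidates:
        return max(candidates)[1]  # highest matching version wins
    return "master" if major == 0 else None

# No v1 tag or branch yet: gopkg.in/juju/jujusvg.v1 has nothing to serve.
assert select_ref(["master"], 1) is None
# Once a v1 branch (or tag) exists, that is what gets served.
assert select_ref(["master", "v1"], 1) == "v1"
# Requesting v0 of an untagged repo just gives you master.
assert select_ref(["master"], 0) == "master"
```

This is why tagging (or branching) jujusvg as v1 pins consumers to a known revision instead of whatever master currently imports.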
<sinzui> Makyo, ^ maybe you can help tag jujusvg so that we can build juju again
<jw4> Makyo: if we could tag revision
<jw4> 28683402583926ce903491c14a07cdc5cb371adb as v1
<jw4> in jujusvg that might help us with this CI issue
<sinzui> Makyo, or someone you know. We are getting gopkg.in/juju/charm.v6-unstable pulled into the juju tree. since it is not documented, the merge and build jobs break
<jw4> Makyo: it might even be better if v1 was a branch vs. a tag
<sinzui> jw4, I might have permission to tag it!
<jw4> sinzui: woot!
<sinzui> well yes I can tag, lets see if I can push tags
<sinzui> jw4, is this what you expect for the tag command?
<sinzui> git tag -a v1 -m "Version 1 for gopkg.in." 28683402583926ce903491c14a07cdc5cb371adb
<sinzui> just "v1"
<katco> what branch is this being triggered on? 1.24?
<sinzui> katco, every branch!
<jw4> katco: 1.24 and master
<katco> godeps -u dependencies.tsv && cat dependencies.tsv |awk '{print $1}' |xargs -I % grep -r "charm.v6" $GOPATH/src/%
<katco> this is not giving me anything useful
<sinzui> 1.23 1.24 and all the feature branches
<jw4> katco: it's a transitive dependency
<sinzui> katco, hence we know something happened outside of juju
<jw4> katco: github.com/juju/jujusvg master refers to charm.v6
<katco> jw4: right, so i am searching all of juju-core dependencies for charm.v6
<katco> and i'm not finding anything?
<jw4> katco: exactly
<jw4> katco: that's why CI is complaining
<sinzui> katco, that is the error, you will not find it because it is not supposed to be there..
<jw4> CI compares what go get builds against dependencies.tsv
<katco> jw4: no as in, none of juju-core's dependencies are referencing charm.v6-unstable
<jw4> katco: that's the problem jujusvg does
<sinzui> katco, and all merging branches, not just ci
<jw4> but not in the version that's pinned
<sinzui> jw4, is the tag just "v1"
<jw4> sinzui: yeah I think so
<katco> jw4: but if not in the version that's pinned, why is it causing problems? effectively juju is not utilizing a version of jujusvg that is referencing charm.v6?
<jw4> katco: so "go get" pulls down jujusvg master which refers to charm.v6, causing charm.v6 to be pulled down
<Makyo> sinzui, jw4 Will gladly tag, but I'm a little curious, why is jujusvg showing up in core?
<katco> jw4: so the ci server is doing a go get, and not a godeps?
<jw4> katco: even godeps does a go get first before syncing to the right revision
<jw4> Makyo: good question
<sinzui> katco, merge might be, but the rules to make the tarball don't; besides, I think godeps is still calling git to do something, which is still the problem
<katco> jw4: so the issue is that charm.v6 exists at all on the CI server?
<Makyo> jw4, jujusvg's only consumer should be charmstore
<katco> jw4: not that juju-core uses it?
<jw4> katco: exactly
<katco> jw4: thx for walking me around the block on that
<jw4> katco: because CI checks that all dependencies downloaded are accounted for
<sinzui> katco, not at all
<jw4> Makyo: it's in our dependencies.tsv, but it's not actually referred to in code... I think it's a transitive dependency
<jw4> Makyo: gimme a sec to figure out the third level up :)
<sinzui> katco, anyone who tries to get juju using go get or godeps is getting a tree with unused deps, because Go's use of git is to always check out master.
<Makyo> jw4, ack, thanks.  I'll work on tagging.
<jw4> Makyo: it's charmstore.v4 that's using it
<sinzui> katco, There are jobs owned by Core to do merging, and jobs owned by QA to make release tarballs for testing. Both jobs fail because the tree contains a package that is not used. Ubuntu requires us to abort. We cannot accept an unused and undocumented dep
<jw4> Makyo: so it's a legitimate transitive dependency
<katco> sinzui: i understand now. ty
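The rule sinzui describes — fail the build if the tree contains any package directory not documented in dependencies.tsv — can be illustrated with a self-contained sketch. The directory layout, file contents, and check logic below are fabricated for the example; the real check lives in the QA merge/tarball job scripts:

```python
# Self-contained illustration of the CI rule described above: any package
# directory present in the tree but absent from dependencies.tsv is
# "extant but unknown" and fails the build.
import os, tempfile

root = tempfile.mkdtemp()

# Packages actually on disk after "go get" (master pulled in charm.v6).
for pkg in ["github.com/juju/jujusvg",
            "gopkg.in/juju/charm.v5",
            "gopkg.in/juju/charm.v6-unstable"]:
    os.makedirs(os.path.join(root, "src", pkg))

# dependencies.tsv documents only the pinned deps (tab-separated).
tsv = ("github.com/juju/jujusvg\tgit\tabc123\n"
       "gopkg.in/juju/charm.v5\tgit\tdef456\n")
documented = {line.split("\t")[0] for line in tsv.splitlines() if line}

extant = set()
src = os.path.join(root, "src")
for dirpath, dirnames, filenames in os.walk(src):
    rel = os.path.relpath(dirpath, src)
    if rel in documented:
        dirnames[:] = []      # don't descend into known packages
        extant.add(rel)
    elif not dirnames:        # leaf directory not covered by any dep
        extant.add(rel)

unknown = sorted(extant - documented)
assert unknown == ["gopkg.in/juju/charm.v6-unstable"]
print("Extant directories unknown:", *unknown)
```

The transitive pull jw4 describes is invisible to a grep of dependencies.tsv, which is exactly why katco's search found nothing while the tree check still failed.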
<sinzui> I might be a minute from fixing this if someone can confirm the tag is just "v1"
<Makyo> jw4, alright, thanks
<jw4> sinzui: Makyo will tag for us... (thanks Makyo)
<katco> well it sounds like you guys have this under control, and i'm well past eod and need to get dinner going
<katco> i'll see you all around
<jw4> katco: control is an illusion
<jw4> katco: ttyl
<katco> jw4: later morpheus ;0
<katco> ;)
<jw4> haha
<sinzui> I think my main dislike of Christmas is lack of control. I don't like surprises, even if they are gift wrapped.
<jw4> sinzui: I'm so there... I hate surprises :)
<Makyo> jw4, sinzui gopkg.in/juju/jujusvg.v1 should now work.  I went with 890de36 because that's the commit before charm.v6 was added, but after the NewFromBundle() api solidified.
<Makyo> jw4, sinzui if need be, I can roll it back.  I went with a branch for v1
<jw4> Makyo: actually I preferred a branch
<jw4> Makyo: that should be fine
<jw4> Makyo: thanks!
<Makyo> jw4, np, ping if anything else comes up around that.
<jw4> sinzui: now for the test
<jw4> Makyo: will do
<sinzui> jw4, shall I just ask CI to start again?
<jw4> sinzui: we have to update dependencies.tsv... but I just realized... charmstore will have to use the gopkg version
<sinzui> :?
<sinzui> Well this is a pickle
<jw4> sinzui: because it's charmstore that actually causes the transitive download of jujusvg
<jw4> sinzui: a veritable dilly of a pickle
 * sinzui ponders a hack to just purge undocumented deps
<jw4> sinzui: the good news is, it looks like the gopkg.in version works without pulling in charm.v6
<sinzui> :)
<jw4> sinzui: so now we just need to get a PR in to charmstore...
<wallyworld> sinzui: i'm back now, catching up on irc scroll back
<Makyo> jw4, sinzui will +1 it
<Makyo> I will, that is.  Vague.
<jw4> Makyo: hehe
<sinzui> wallyworld, jw4: My family insists we celebrate the mixed news: my son got into advanced academics, and the IRS sent me a $326,000 bill. Meanwhile, I might hack the scripts to ignore the bad package so that we can test what we have. We are not planning to release any of the development branches this week
<jw4> sinzui: that's crazy - congrats on one of those bits of news
<sinzui> :)
<jw4> sinzui: okay - it looks like I'll need at least 2 PR's to convert to gopkg.in anyway
<wallyworld> sinzui: oh wow
<wallyworld> go have fun and drown your sorrows
<wallyworld> surely that bill is a mistake
<jw4> Makyo: I think we're going to need the v1 branch to start right at 28683402583926ce903491c14a07cdc5cb371adb, and then I'll do a PR to update internal references to github.com/juju/jujusvg to use gopkg.in/juju/jujusvg.v1
<jw4> Makyo: the problem is the head of the v1 branch has some breaking API changes (IconFetcher interface) that would cascade out
<Makyo> jw4, ack, sounds good.
<Makyo> jw4, otp, will do after
<jw4> Makyo: thanks...
<jw4> Makyo: I'll be in and out the next few hours, but appreciate it whenever you get that change in
<Makyo> jw4, np, will do
<Makyo> jw4, git's amazing and now that's done, I think.
<jw4> suh-weet!
<jw4> Makyo: thanks again
#juju-dev 2015-05-01
<menn0> wallyworld: ping?
<wallyworld> yo
<menn0> wallyworld: can I have a quick hangout with you regarding a bug fix?
<wallyworld> sure, pop into tanzanite standup https://plus.google.com/hangouts/_/canonical.com/tanzanite-stand
<menn0> wallyworld: I get "trying to join the call" but it's not making it
<wallyworld> hmm, i'll try and invite you
<natefinch> perrito666:  you around?
<natefinch> bah, I need a chrome extension so I can easily filter bug lists on launchpad.
<sinzui> perrito666, I had to retry the build-revision job for your branch. We were getting 503's from Google's repos.
<wallyworld> sinzui: did you see https://github.com/juju/jujusvg/pull/29
<wallyworld> there's an issue merging?
<natefinch> I was looking at that problem... there's a couple repos under github.com/juju that import charm.v6-unstable but it doesn't look like they're referenced from juju/juju
<sinzui> wallyworld, natefinch, yes the fix is in. We got unlucky network immediately after the fix. http://reports.vapour.ws/releases/2587/job/build-revision/attempt/2587
<sinzui> I tried the build a few minutes later and all is well
<wallyworld> sinzui: so how do we merge https://github.com/juju/jujusvg/pull/29 ? is there a bot?
<sinzui> wallyworld, I don't know. since perrito666's merge worked, I think everything is fixed
<wallyworld> oh i see
<sinzui> wallyworld, both git-merge-juju and build-revision agree the tree matches dependencies.tsv
<wallyworld> sinzui: i'm wanting to see how the next CI run behaves with regard to the failed HA and bundle jobs
<sinzui> wallyworld, I think the issue is simply the underlying git use of master that caused unwanted deps to be pulled.
<wallyworld> gah i wish the go ecosystem had better dep management built in
<sinzui> wallyworld, I was helping homebrew today as well. They couldn't get the right deps either.
<wallyworld> sinzui: oh, build just failed again generating tarball
<sinzui> I need to follow up with them and explain why Go wasn't giving them the versions that juju was asking for.
<wallyworld> yeah, if proper dep mgmt was built into the go ecosystem it would be a lot easier :-/
<natefinch> wallyworld: if we used godep (not godeps), this wouldn't be an issue.
<natefinch> wallyworld: and honestly, it's only an issue because we decided we cared if there was extra stuff in our gopath.  We could stop caring about that, and this "problem" would go away.
<wallyworld> natefinch: that's still not built into the official tool set
<wallyworld> natefinch: we must care about the extra stuff for repeatable packaging reasons
<wallyworld> distros have rules we must follow
<wallyworld> like with backwards compatibility
<wallyworld> not breaking 100000000's of deployments is hard
<mup> Bug #1450631 changed: Something is injecting gopkg.in/juju/charm.v6-unstable into the tree <blocker> <ci> <packaging> <juju-core:Fix Released by makyo> <juju-release-tools:Triaged> <https://launchpad.net/bugs/1450631>
<natefinch> wallyworld: godeps is a lot more in line with what distros would like.  smash all your dependencies into one big repo.
<wallyworld> nooooooo
<wallyworld> not one big repo :-(
<wallyworld> that sucks so badly
<wallyworld> sinzui: CI builds still broken :-(
<wallyworld> Extant directories unknown:
<wallyworld>  gopkg.in/juju/charm.v6-unstable
<sinzui> wow, how did perrito666 get his merge in
<natefinch> what did we do to try to fix that?
<wallyworld> sinzui: which one? the 1.23 one?
<sinzui> oh, is it just 1.24, master, and all devel branches that are affected?
<wallyworld> i kicked off that one earlier
<wallyworld> just 1.24 and master yeah
<wallyworld> i think we need that svg patch
<sinzui> wallyworld, I think so too :(
<wallyworld> natefinch: i think they adjusted the imports to use gopkg and pulled in a tagged version to avoid transitively pulling in the unknown code
<Makyo> I'm taking a look at the CI for jujusvg.  Fix should be in soon.
<wallyworld> \o/
<wallyworld> ty
<sinzui> wallyworld, I disabled http://juju-ci.vapour.ws:8080/view/Juju%20Revisions/job/build-revision/ to give us time to investigate a retest in case CI tries a new build
<wallyworld> ok
<sinzui> wallyworld, If I fall asleep, you can re-enable it. I just don't want to lose a chance to get you more daya
<menn0> wallyworld: do you think it's worth getting this replicaset initialisation fix into 1.23 as well? are we doing another 1.23 release?
<sinzui> data
<wallyworld> menn0: yes we are
<wallyworld> menn0: if it's fairly straightforward, wouldn't hurt
<wallyworld> sinzui: ok, will do ty
<wallyworld> menn0: there's a bad upgrade bug related to the JES stuff that is being fxed for 1.23
<wallyworld> and 1.22
<wallyworld> so there will be new releases next week
<natefinch> gah, I hate launchpad
<wallyworld> :-)
<wallyworld> why?
<menn0> wallyworld: ok so should I aim for 1.22 as well then?
<menn0> wallyworld: what's the JES upgrade bug?
<natefinch> well, today, because what seems like a simple filter does bizarre things
<wallyworld> menn0: https://bugs.launchpad.net/juju-core/+bug/1447853
<mup> Bug #1447853: Local charms are not added to storage on upgrade to 1.22.x <charms> <regression> <storage> <upgrade-juju> <juju-core:In Progress by hduran-8> <juju-core 1.22:Fix Committed by hduran-8> <juju-core 1.23:Fix Committed by hduran-8> <juju-core 1.24:In Progress by hduran-8> <https://launchpad.net/bugs/1447853>
<wallyworld> menn0: upgrade step to add env uuid to charms collection was done in wrong place
<natefinch> wallyworld: if I filter juju-core bugs by "target milestone 1.24-alpha1" ... I get 1 bug listed, even though I know there's like 30 targetted to 1.24-alpha1
<wallyworld> natefinch: you are filtering trunk bugs
<wallyworld> you want https://bugs.launchpad.net/juju-core/1.24/+bugs?advanced=1
<wallyworld> that's why i've been complaining about people entering the milestone incorrectly
<menn0> natefinch: I brought this up in the team meeting yesterday. the same thing has bitten me.
<wallyworld> it's only giving you what you ask for
<menn0> natefinch: you need to select the series before doing the search
<wallyworld> you are asking to filter bugs on the trunk series
<menn0> natefinch: it's not obvious
<natefinch> wallyworld: when I go to https://bugs.launchpad.net/juju-core/+bugs  I see a ton of bugs.  Some of them are targetted to 1.24-alpha1.  If I then just click advanced search and click 1.24-alpha1 .... I get 1 bug
<wallyworld> natefinch: because those bugs have the *wrong* milestone
<wallyworld> trunk bugs must not have a milestone intended for a different series
<natefinch> wallyworld: for example, this bug: https://bugs.launchpad.net/juju-core/+bug/1424892
<mup> Bug #1424892: rsyslog-gnutls is not installed when enable-os-refresh-update is false <cloud-init> <logging> <juju-core:In Progress by natefinch> <juju-core 1.24:Fix Committed by natefinch> <https://launchpad.net/bugs/1424892>
<natefinch> wallyworld: under milestone, one of the "affects" is 1.24-alpha1
<wallyworld> natefinch: sure, but the url you are using is looking for trunk bugs
<wallyworld> see the url i pasted
<natefinch> but it shows up in the original unfiltered list
<natefinch> if it shows up there, and I say, of these bugs, show me the ones with milestone 1.24-alpha1.... that one should show up... or it shouldn't show in the original unfiltered list
<wallyworld> no, the trunk bug shows up and that just happens to be targetted to 1.24 also
<natefinch> this is why I hate launchpad
<wallyworld> 1.24-alpha1 bugs show up in trunk searches because those bugs are *wrongly* targetted
<wallyworld> why?
<wallyworld> you are getting what you ask for
<natefinch> because you have to know all the internal workings to actually use the damn product
<wallyworld> huh, git is *much* worse
<wallyworld> and it's not internal workings
<natefinch> no, I'm not.  That bug is in this list.  It is targeted to 1.24.  I said, show me all bugs in this list that are targetted at 1.24, and it DOESN'T.
<wallyworld> it is set up to handle development series of projects
<wallyworld> it does
<wallyworld> did you use the url i posted
<wallyworld> https://bugs.launchpad.net/juju-core/1.24/+bugs?advanced=1
<wallyworld> you didn't ask for what you thought you did
<wallyworld> you asked for all the bugs against trunk
<natefinch> that are also targeted to 1.24
<natefinch> which that one was
<wallyworld> by clicking the milestone?
<wallyworld> that's not a series
<sinzui> natefinch, wallyworld remember that Lp is bat sh*t crazy when working with series
<natefinch> this is why I hate launchpad
<sinzui> bugs.launchpad.net/juju-core <- search only trunk
<sinzui> bugs.launchpad.net/juju-core/1.24 <- search only 1.24
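Since there is no single cross-series search, the only workaround is to run the same query once per scope (trunk plus each series) and merge the results client-side. A sketch of just the URL construction; the `field.milestone` parameter is a simplification of Launchpad's real advanced-search form fields:

```python
# Sketch of the per-scope workaround for Launchpad bug search: one URL per
# scope (trunk, then each series), with results merged client-side.
# "field.milestone" is a simplified stand-in for LP's actual form fields.
BASE = "https://bugs.launchpad.net"

def search_urls(project, series_list, milestone):
    scopes = [project] + ["{}/{}".format(project, s) for s in series_list]
    return ["{}/{}/+bugs?field.milestone={}".format(BASE, scope, milestone)
            for scope in scopes]

urls = search_urls("juju-core", ["1.23", "1.24"], "1.24-alpha1")
assert urls[0] == ("https://bugs.launchpad.net/juju-core/+bugs"
                   "?field.milestone=1.24-alpha1")
assert urls[2] == ("https://bugs.launchpad.net/juju-core/1.24/+bugs"
                   "?field.milestone=1.24-alpha1")
```

This is exactly natefinch's complaint in code form: the first URL searches only trunk tasks, so a bug whose 1.24-alpha1 milestone lives on the 1.24 series task never appears there.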
<wallyworld> yes
<wallyworld> yes
<natefinch> how do I search EVERYTHING?
<sinzui> and there is no possible way to search everything
<wallyworld> very logical
<natefinch> I don't give a shit about series
<sinzui> damn it
<sinzui> natefinch, We had a plan to fix this 5 years ago, but it was killed at higher levels
 * natefinch turns off launchpad rant.
<wallyworld> natefinch: lp was designed for UDD
<wallyworld> it fits that use case very well
<wallyworld> or else we wouldn't have ubuntu
<natefinch> I am very happy that it works well for ubuntu :)
<sinzui> natefinch, wallyworld. It doesn't. Ubuntu is saddled with a million bugs, but we know most of them don't apply because the code is gone. It is not possible to say a bug is or isn't in *Ubuntu* each time we open a series
<sinzui> wallyworld, remember my scripts to chart Lp bugs.
<sinzui> I pulled down the entire DB of bugs so that I could properly search, but since bugs can be moved to different projects, we would see bugs freeze (because the bug was moved out of the project) or bugs appear at random (because someone moved the bug into the project). It isn't possible to ask LP for a change set to keep your own db of bugs current
<axw> wallyworld: https://jujucharms.com/docs/devel/wip-storage
<axw> wallyworld: eh, hangouts just died..
<natefinch> sinzui: I never realized that you can't search across series.  There's no way to find, for example, all bugs in HA, across all series?  That's amazingly bad.
<sinzui> natefinch, most projects don't use series since they break search
<natefinch> sinzui:  then why are we?
<sinzui> natefinch, most projects just use milestones, and they don't fork trunk until they are really sure
<sinzui> natefinch, because juju-core keeps doing multiple series of development
<natefinch> sinzui: branches in VCS do not have to have anything to do with launchpad series
<sinzui> natefinch, Lp won't let you have 3 tasks to say you need to merge a fix into three branches without using 3 series
<sinzui> natefinch, I think core is mad to try so many lines of development.
<natefinch> sinzui: we really only have two - master and the release we're trying to stabilize
<sinzui> natefinch, there were 3 this morning, and secretly 3 now since I need to get a 1.22.3 out
<natefinch> sinzui: that's two and then every old one is still in maintenance mode
<natefinch> shrug
<natefinch> anyway.... I think we should drop series in LP.  I don't really care if you can't use the milestone to show it needs to be put into 1.24 and master.  You can show that in other ways.  Being able to FIND BUGS seems like a pretty good reason not to use series.
<sinzui> wallyworld, can you review this sad branch http://reviews.vapour.ws/r/1543/
<natefinch> but, I gotta go. kids are waking up when they're not supposed to be
<sinzui> natefinch, +1. and stop forking when branches aren't close to stable: much less merging, less to track, less lost work
<sinzui> wallyworld, can you review this sad branch http://reviews.vapour.ws/r/1543/
<wallyworld> looking
<wallyworld> lgtm
<sinzui> thank you.
<Makyo> jujusvg CI: I can't get it to work against gopkg.in in  my post-dinner, heavily pajama'd state.  I'll be on it again in the morning.  If needed before then, ping rogpeppe, mhilton, or frankban for faster response.  Writing an email to the rest of the UI team about this now.
<mup> Bug #1450631 was opened: Something is injecting gopkg.in/juju/charm.v6-unstable into the tree <blocker> <ci> <packaging> <juju-core:In Progress by makyo> <juju-release-tools:Triaged> <https://launchpad.net/bugs/1450631>
<Makyo> mup++
<menn0> wallyworld: fix for bug 1441904
<mup> Bug #1441904: jujud won't start if apt-get of juju-mongodb package fails <canonical-is> <upgrade-juju> <juju-core:In Progress by menno.smits> <juju-core 1.23:In Progress by menno.smits> <juju-core 1.24:In Progress by menno.smits> <https://launchpad.net/bugs/1441904>
<menn0> http://reviews.vapour.ws/r/1544/
<wallyworld> looking ty
<wallyworld> menn0: done
<menn0> wallyworld: I see what you're saying but that might have still not worked in the situation that triggered this bug report
<menn0> wallyworld: apt's cache files were corrupted due to the disk having run out previously
<wallyworld> menn0: understood, but in the general case there may be a network error or some issue preventing a mongo update but mongo is still installed
<wallyworld> if apt-cache fails, maybe we look for /usr/bin/mongo?
<menn0> wallyworld: what about "dpkg --list juju-mongodb" instead. strace indicates that hits a much smaller set of files (the package DB rather than the apt cache files)
<wallyworld> menn0: sure, my apt foo sucks
<wallyworld> whatever is the most reliable way
<wallyworld> so long as juju attempts to verify that some mongo is installed
<menn0> wallyworld: kk
<menn0> wallyworld: I just found that after attempting to apt-get the juju mongodb package EnsureServer checks that the binary exists where it expects it
<menn0> wallyworld: so it looks like we get that for free
<menn0> wallyworld: although it falls back to $PATH if it can't find mongo at the location that juju-mongodb puts it
<menn0> wallyworld: which seems a little crazy
<menn0> wallyworld: who knows which version of mongo it'll get
<menn0> wallyworld: ah I see. we use a non-juju-specific mongo package for precise through saucy
<wallyworld> menn0: yes
<wallyworld> 2.4.6
<wallyworld> the juju one is 2.4.9 i think
<menn0> wallyworld: so I think the PR is fine as it is.
<menn0> wallyworld: if mongo really isn't installed after the apt-get attempts then EnsureServer still errors
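The lookup order menn0 describes — prefer the binary installed by the juju-mongodb package, then fall back to whatever mongod is on $PATH — can be sketched like this. The "packaged" path below is an assumed example, not taken from juju's source; the point is the preference order and why the $PATH fallback can silently hand you an arbitrary mongo version:

```python
# Sketch of the lookup order discussed above: prefer the packaged binary,
# fall back to $PATH. The default path is an assumption for illustration.
import os, shutil, stat, tempfile

def find_mongod(preferred="/usr/lib/juju/bin/mongod"):
    if os.access(preferred, os.X_OK):
        return preferred
    return shutil.which("mongod")  # may be any version at all, or None

# Demonstrate with a fake "packaged" binary in a temp dir.
d = tempfile.mkdtemp()
fake = os.path.join(d, "mongod")
with open(fake, "w") as f:
    f.write("#!/bin/sh\n")
os.chmod(fake, os.stat(fake).st_mode | stat.S_IXUSR)

assert find_mongod(preferred=fake) == fake
```

A `None` result here corresponds to the case menn0 mentions: if no mongo is present after the apt-get attempts, EnsureServer still errors rather than starting without a server.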
<wallyworld> great, thanks for checking up
<wallyworld> that rings a bell now
<wallyworld> i had forgotten that
<menn0> you mean you can't remember every aspect of the codebase?
<menn0> :-p
<wallyworld> just mark my comment as a fuck off
<wallyworld> not *every* bit
<menn0> how about "politely dropped"
<menn0> ?
<menn0>  :)
<wallyworld> sure :-)
<menn0> ok merging now
<menn0> wallyworld: do we need to JFDI even for the 1.23 and 1.24 branches?
<wallyworld> menn0: not 1.23
<wallyworld> just 1.24
<menn0> wallyworld: ok
<axw> wow, disabling all providers except openstack cuts jujud size by 26MB :o
<wallyworld> 26MB!!
<jw4> Makyo: thanks!
<jw4> Makyo: we can pick it up again tomorrow
<mup> Bug #1450701 was opened: Juju CLI compatibility option <status> <juju-core:In Progress by wallyworld> <juju-core 1.24:In Progress by wallyworld> <https://launchpad.net/bugs/1450701>
<menn0> anastasiamac: your PR failed due to good old bad record MAC
<menn0> anastasiamac: you should resubmit
<mup> Bug #1450706 was opened: juju-core 1.23.2 fails with an error on destroying a local environment on vivid <juju-core:New> <https://launchpad.net/bugs/1450706>
<axw> wallyworld: still having trouble with cinder, but it looks like it might be an infrastructure issue. the volume goes to "attaching" but never progresses. I think I'll just propose anyway
<wallyworld> ok
<wallyworld> i have to duck out for school pickup, will be back in a bit
<mup> Bug #1450706 changed: juju-core 1.23.2 fails with an error on destroying a local environment on vivid <juju-core:New> <https://launchpad.net/bugs/1450706>
<mattyw> morning all - any jes folks still around?
<mattyw> ah rogpeppe1 you might know
<mattyw> rogpeppe1, actually - forget that, I'm all good
 * rogpeppe1 forgets everything anyway
<rogpeppe1> mattyw: i'm interested what you might want to have asked anyway
<mattyw> rogpeppe1, if I'm talking about sending some data from a Uniter to a State Server... I wondered if these days it's more accurate to say Uniter -> Jes?
<rogpeppe1> mattyw: the two terms are equivalent
<mattyw> rogpeppe1, I guess so, except when folks say jes in my head I always think - one or more servers, but when folks say State Server I always assume just 1
<rogpeppe1> mattyw: they're just different endpoints on the same server
<axw> wallyworld: FYI, I switched to canonistack's lcy02 region and it all worked
<axw> wallyworld: also, "juju status-history" is very handy :)
<wallyworld> axw: awesome, just looking now
<wallyworld> :-)
<mgz> axw you're going to clash nastily with my os provider branch
<wallyworld> axw: why define a const autoAssignedMountPoint = ""   ?
<wallyworld> mgz: i hope your demo went well?
<axw> mgz: changes to existing OS code is mostly just import changes, shouldn't be too much of a hassle
<mgz> wallyworld: it stumbled a bit at the start :)
<wallyworld> oh ? :-(
<mgz> axw: right, but we did the same import change, so one of us will have the to merge and resolve
<axw> wallyworld: for self-documentation -- better than just a "" literal
<mgz> wallyworld: the machine didn't come out of pending, and I waited a bit too long,
<axw> mgz: ah ok.
<mgz> destroying the service and deploying again worked fine
<axw> what's the demo?
<wallyworld> axw: ah, i see, the comment confused me but it makes sense now
<mgz> wallyworld: I have a note from curtis, that I want to check with you
<mgz> axw: I showed off the status-set stuff again
<axw> cool
<mgz> wallyworld: as I understand it, he wants CI to build the 1.22 branch next, so it's ready to go on release when he is up
<mgz> so, build-revision is currently disabled
<wallyworld> mgz: yes, i think we can enable that job now
<mgz> when the current CI test run is done, I'm going to re-enable that, with the cron set to do 1.22 first, and let that go through all our tests
<wallyworld> mgz: he added a hack to delete the unintended source
<wallyworld> sounds good
<mgz> wallyworld: the versioning the svg thing worked for now right?
<mgz> I'm going to finish a branch now to resolve deps differently that I've sat on for a while,
<mgz> which is at least a stop-gap till godeps can get uncoupled from go get being silly
<axw> mgz: why can't we just use "godeps -u", then delete everything under GOPATH that doesn't exist in dependencies.tsv ?
<axw> or is that what you're going to do
<mgz> that's what my branch does
<mgz> I don't really *like* it, because it relies on our build/test stuff to catch dep errors more, but for juju-core at least we should break noisily if we remove a dep from the tarball that we actually require
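The pruning approach axw and mgz are discussing can be sketched as follows: after `godeps -u dependencies.tsv` has fetched the pinned revisions, delete every checkout under `$GOPATH/src` that the file doesn't list. dependencies.tsv is tab-separated with the import path in column 1; the demo tree and paths below are illustrative, not juju's real layout.

```shell
set -e
GOPATH=$(mktemp -d)
# Illustrative tree: one listed dependency, one stray checkout.
mkdir -p "$GOPATH/src/github.com/juju/errors" \
         "$GOPATH/src/github.com/stray/leftover"
touch "$GOPATH/src/github.com/juju/errors/errors.go"
printf 'github.com/juju/errors\tgit\tabc123\t2015-01-01\n' > "$GOPATH/dependencies.tsv"

# (a real run would do: godeps -u "$GOPATH/dependencies.tsv" here)

cd "$GOPATH/src"
TAB=$(printf '\t')
for dir in */*/*; do
    [ -d "$dir" ] || continue
    # Keep the checkout only if its import path appears in column 1.
    grep -q "^$dir$TAB" "$GOPATH/dependencies.tsv" || rm -rf "$dir"
done
find . -type d -empty -delete   # drop now-empty parent directories
```

As mgz notes, this only breaks noisily if the build actually exercises the removed dependency, so it leans on CI to catch over-pruning.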
<wallyworld> mgz: not sure, i know curtis added a hack to delete the unexpected source dir
<mup> Bug #1450729 was opened: juju should be able to use nodes acquired by the same user in MAAS <landscape> <juju-core:New> <https://launchpad.net/bugs/1450729>
<mgz> enabling build-revision
<dooferlad> voidspace: can we jump in a hangout and talk more about container addressing?
<wallyworld> mgz: when will the next 1.24 CI build trigger?
<mgz> wallyworld: depends if this 1.22 goes through fine, I'm going to hold CI for it
<wallyworld> ok
<mgz> wallyworld: sinzui's message said you had debugging you needed to do on something?
<voidspace> dooferlad: yes, let me grab coffee first
<mgz> please yell if you need a hand with anything
<wallyworld> mgz: not really, the quickstart issue was identified and fixed in juju. a subsequent CI run failed, but that was because the charms were simply still installing when the CI test decided to time out. i can't see how juju is at fault off hand. so i'm curious to see another run
<wallyworld> mgz: fwiw, a master CI run also failed with machines still being allocated when the test gave up
<wallyworld> so maybe there's a slow cloud at work
<dooferlad> voidspace: I am in the standup hangout
<mgz> wallyworld: one thing that may be relevant on this
<mgz> wallyworld: the testing expects status to be responsive - this is a user requirement
<mgz> wallyworld: so, we have a timeout for the whole job, but the test will also fail if we didn't have regular feedback from status during it
<voidspace> dooferlad: omw
<voidspace> dooferlad: are you sure you're in the standup hangout?
<voidspace> I don't see you there...
<wallyworld> mgz: that shouldn't be a problem, the mechanism to update status hasn't changed as such. what does the deployer do? use the megawatcher and expect updates to the backing doc?
<wallyworld> the same underlying db status watcher is used
<mup> Bug #1450737 was opened: provider/openstack: cinder provider should reject attempts to create non-persistent volumes <storage> <juju-core:Triaged> <https://launchpad.net/bugs/1450737>
<mup> Bug #1450740 was opened: provider/openstack: volumes are recorded with 0 size <storage> <juju-core:Triaged> <https://launchpad.net/bugs/1450740>
<mgz> wallyworld: yeah, it shouldn't be anything remarkable
<wallyworld> mgz: so maybe we do need to debug, but at least the new status actually provided good info as to what the system is doing. i also saw a bunch of config change hook errors. so something is not right
<mgz> wallyworld: I'm a bit confused with the newest 1.24 and master branches... it's just too messy. I'm hoping we get back to a sane state next week.
<wallyworld> i hope for before then
<mgz> I'm not super happy with you guys smashing stuff in with jfdi while we're trying to sort out release issues
<wallyworld> the stuff landing is all storage, which is not used unless the charm is configured
<mgz> I'm really pleased everyone has been picking up bugs and fixing them
<mgz> but this has just been all over the place
<wallyworld> the blocking bugs have been quickstart related and also a bit of networking and the quickstart thing has been fixed
<wallyworld> as in, quickstart was failing because juju was misreporting status
<mgz> but we can't get sane confirmation that the bug fixes are good on 1.24/master
<wallyworld> sure, but that's not due to storage code which is unused
<wallyworld> unless the charm requests it
<dimitern> dooferlad, hey there
<dimitern> dooferlad, can you review http://reviews.vapour.ws/r/1549/
<rogpeppe1> is there anyone around that might be able to review a change to godeps, please? https://codereview.appspot.com/230460044
<rogpeppe1> it's a fix for a very annoying issue that's been a problem for ages
<wallyworld> sinzui: jenkins seems idle. should it be doing a CI run?
<wallyworld> i'd be interested to see a 1.24 run
<sinzui> wallyworld, I had to change CI last night to ensure I get 1.22 tested and if it failed, it had to stop so humans can learn why just a version change broke it
<wwitzel3> katco: omw .. hangout is giving me issues
<katco> wwitzel3: np
<wwitzel3> natefinch: took your tips from the ML on forward porting and put them in a doc, edits/corrections welcome. http://reviews.vapour.ws/r/1551/
<natefinch> wwitzel3: awesome!  Should be -m 1 <sha> for the cherry-pick though
<natefinch> wwitzel3: don't ask what the 1 means, I don't know, but it works :)
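For the record, natefinch's `-m 1` does have a meaning: a merge commit has two parents, and cherry-pick needs to know which one to diff against. `-m 1` names parent 1, the branch the merge landed on, so the pick replays only the changes the merged branch brought in. A runnable sketch of the forward-porting recipe (branch names illustrative):

```shell
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo

git checkout -qb release                 # the older release branch
echo base > file.txt && git add . && git commit -qm base
git checkout -qb fix
echo fix >> file.txt && git commit -qam fix
git checkout -q release
git merge -q --no-ff -m "merge fix" fix  # the merge commit to forward-port
MERGE_SHA=$(git rev-parse HEAD)

# A newer branch that doesn't have the fix yet, started from the same base.
git checkout -qb master "$(git rev-list --max-parents=0 HEAD)"
# Forward-port the whole merge as one commit; -m 1 = diff against parent 1.
git cherry-pick -m 1 "$MERGE_SHA"
```

After the pick, `file.txt` on master contains the fix, without replaying each commit of the merged branch individually.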
<wwitzel3> ericsnow: in reviewboard, can we make the pull request link be [url](url)
<ericsnow> wwitzel3: probably :)
<ericsnow> wwitzel3: I'll take a look
<ericsnow> wwitzel3: that has bugged me too
<wwitzel3> ericsnow: it is only frustrating because of the double click to edit
<wwitzel3> ericsnow: so when I try to highlight the link, it always goes in to edit mode
<wwitzel3> ericsnow: you can tell me where the code is and I'll make a PR :)
<ericsnow> wwitzel3: https://bitbucket.org/ericsnowcurrently/rb_webhooks_extension
<ericsnow> wwitzel3: :)
<wwitzel3> ericsnow: you have a PR :)
<ericsnow> wwitzel3: first one! :)
<ericsnow> wwitzel3: thanks!
<natefinch> wwitzel3: thanks for doing that!   I kept meaning to mention that, too.
<alexisb> wwitzel3, ping
<natefinch> someone needs to ban wallyworld until he can fix his IRC client
<rick_h_> natefinch: or update yours to not show join/part? :P
<redelmann> Hi, where can I find more info about "juju user"?
<natefinch> rick_h_: I like seeing when people join or leave the channel... it's useful information, except when people's irc clients go on the fritz and spam the damn channel.
<natefinch> ericsnow: check out github.com/natefinch/plugin
<ericsnow> natefinch: that's pretty cool
<ericsnow> natefinch: make sure to close the pipes in error cases in Start
<natefinch> ericsnow: ahh yeah, thanks.
<ericsnow> natefinch: you're right; it is simple :)
<ericsnow> natefinch: nice job
<natefinch> ericsnow: the nicest thing about it is that, aside from the resources required to run a second process, and the slight overhead of serialization, to users and developers, it works much like in-process plugins.
<ericsnow> natefinch: yep
<natefinch> for devs it's just function calls. For users, they just drop the plugins in a folder and forget about them.
<ericsnow> natefinch: now that I've seen it I'm much more open to the idea of using a local RPC like that for providers :)
<ericsnow> natefinch: plugins should be pretty package-able too
<natefinch> ericsnow: yeah... it's a lot different than what I normally think of when someone says "RPC".  I think a separate, long-running process that listens on port or socket, etc.... there's a lot of complexity and a lot that can go wrong.
<ericsnow> natefinch: that's exactly the same concern I had
<natefinch> ericsnow: I happened to be looking into ways to do plugins, and saw someone mention using RPC over stdin/stdout and it was like OH!  That's so much better!
<natefinch> ericsnow: fixed the closing pipes issue, thanks
<alexisb> perrito666, you around today?
<natefinch> alexisb: in theory it's a holiday for Argentina today
<alexisb> natefinch, ack thanks
 * natefinch still has the argentinian holiday calendar turned on in google calendar :)
<alexisb> ericsnow, natefinch in perrito666 absence have you guys taken a look at this bug:
<alexisb> https://bugs.launchpad.net/juju-core/+bug/1450573
<mup> Bug #1450573: HA and backup recovery tests failed <backup-restore> <blocker> <ci> <ec2-provider> <ensure-availability> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1450573>
<alexisb> it is still causing CI to fail, was curious if you guys agreed with Ian's findings
<natefinch> alexisb: I haven't done more than skimmed it.  I don't know if perrito666 looked into it, though he spent a lot of yesterday at the dentist, so probably not.
<natefinch> alexisb: I can take a look now... was just looking for a bug to pick up.
<alexisb> natefinch, sweet, thank you
<ericsnow> alexisb, natefinch: FWIW, having looked at the logs, Ian's conclusions seem reasonable
<alexisb> natefinch, there are new failures in the latest run of CI
<alexisb> not sure if those logs have anything new
<alexisb> but if you agree with Ian's findings please make note in the bug as I will need to follow-up with the QA team
<natefinch> alexisb: I'm with Ian and eric, it seems like timeouts due to slow performance.  There's a linked bug about us having race conditions where we install stuff and then try to use it without being sure that the install is really finished.
<natefinch> EOD for me.
<bdx> Hows it going everyone?
<bdx> Would you consider adding a config-flags param to ceph and ceph-osd?
<mup> Bug #1450191 changed: quickstart thinks the unit is started when it's still being installed <api> <blocker> <ci> <quickstart> <regression> <juju-core:Fix Released by wallyworld> <juju-core 1.24:Fix Released by wallyworld> <juju-quickstart:Invalid> <https://launchpad.net/bugs/1450191>
<mup> Bug #1450912 was opened: quickstart is universally broken in 1.24 <blocker> <ci> <quickstart> <regression> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1450912>
<mup> Bug #1450917 was opened: deployer on maas 1.7 cannot complete <blocker> <ci> <deployer> <maas-provider> <regression> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1450917>
<mup> Bug #1450919 was opened: many window unit tests failures <blocker> <ci> <regression> <windows> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1450919>
<wwitzel3> ericsnow: ping
<ericsnow> wwitzel3: hey
<wwitzel3> want to do some interfacingfoo?
<ericsnow> sure
<wwitzel3> katco: ping, if you're around
<ericsnow> wwitzel3: moonstone?
<wwitzel3> ericsnow: works for me
#juju-dev 2015-05-02
<mup> Bug #1441826 changed: deployer and quickstart are broken in 1.24-alpha1 <api> <blocker> <ci> <deployer> <quickstart> <regression> <juju-core:Fix Released by hduran-8> <juju-core 1.24:Fix Released by hduran-8> <https://launchpad.net/bugs/1441826>
<mup> Bug #1451100 was opened: TestCheckProviderProvisional fails on ppc64 <ci> <ppc64el> <regression> <test-failure> <juju-core:Triaged> <juju-core 1.24:Triaged> <https://launchpad.net/bugs/1451100>
<mup> Bug #1451104 was opened: Joyent machines fail to fetch tools <ci> <deploy> <joyent-provider> <reliability> <juju-core:Triaged> <https://launchpad.net/bugs/1451104>
<mup> Bug #1451104 was opened: Joyent machines can fail to fetch tools <ci> <deploy> <joyent-provider> <reliability> <juju-core:Triaged> <https://launchpad.net/bugs/1451104>
#juju-dev 2015-05-03
<thumper> morning folks
<wallyworld> thumper: hey, nice week away?
<thumper> wallyworld: yep
<thumper> a week without signal even
<wallyworld> awesome
<thumper> going through my 1900 work emails
<wallyworld> and now 1000000 emails
 * thumper nods
<thumper> luckily I have a reasonably aggressive delete filter
<wallyworld> thumper: i have a couple of reviews needing to be done also when you are free :-) since you are OCR :-) for the 1.24 release
 * thumper nods
<wallyworld> \o/
<menn0> wallyworld: ping
<wallyworld> menn0: in standup be with you in a sec
<menn0> wallyworld: np
<wallyworld> menn0: wazzup
<menn0> wallyworld: so i'm looking at the fix for bug 1441913
<mup> Bug #1441913: juju upgrade-juju failed to configure mongodb replicasets <canonical-is> <mongodb> <upgrade-juju> <juju-core:In Progress by menno.smits> <juju-core 1.23:In Progress by menno.smits> <juju-core 1.24:In Progress by menno.smits> <https://launchpad.net/bugs/1441913>
<menn0> wallyworld: the idea of doing the replicaset init as an upgrade step isn't working out
<wallyworld> bollocks
<wallyworld> seemed like a great idea at the time
<menn0> wallyworld: b/c mongo is started with --replSet and won't let you do anything until you initialise the replicaset
<menn0> wallyworld: but we can't start upgrade until mongo is up and running
<menn0> wallyworld: chicken and egg problem
<wallyworld> hmmm
<menn0> wallyworld: so i'm looking at changing ensureMongoServer so it always attempts the replicaset init stuff
<wallyworld> maybe we need a 3rd type of upgrade :-)
<menn0> wallyworld: all the pieces seem to handle the case of their part already having been done
<menn0> wallyworld: it means a bit more work each machine agent startup
<menn0> wallyworld: but would be safer
<menn0> wallyworld: just wondering if that seemed sensible to you or if you had any other ideas
<wallyworld> quick hangout, i can't quite get my head around the exact steps
<wallyworld> ?
<menn0> wallyworld: ok
<wallyworld> onyx su
#juju-dev 2016-05-02
<thumper> menn0: https://github.com/juju/gomaasapi/pull/53
<menn0> thumper: will look shortly
<thumper> kk
<veebers> Hi all, have an issue deploying mediawiki-single as per the 'getting started' guide (that I'm going through). The mysql unit (lxd) fails to start, looks like it cannot setup the storage (Fatal error: cannot allocate memory for the buffer pool). Appears there is no swap available etc. too. Any thoughts on how to debug this?
<veebers> ugh, I was wrong about swap it seems. plenty there and free memory too
<menn0> thumper: LGTM
<menn0> veebers: so the machine for the mediawiki unit has come up but mediawiki isn't starting?
<veebers> menn0: correct
<veebers> menn0: status was "Hook start failed", I ssh-ed in to check error logs, mysql failed when trying to start the storage backend
<veebers> menn0: I've, uh, since blown it away and I'm trying to just deploy a mysql unit now
<menn0> veebers: i'm no mysql expert I'm afraid. it would be useful to see more of the logs around the failure though.
<menn0> veebers: maybe one of the mysql options set by the mediawiki-single bundle isn't right? (https://api.jujucharms.com/charmstore/v5/bundle/mediawiki-single-9/archive/bundle.yaml)
<veebers> menn0: Hmm, my initial thought on the error may be a red herring. The single mysql deploy failed too, similar logs. It appears something bad happens at the start and it keeps trying, then at some point the storage backend can no longer even be instantiated.
<veebers> menn0: logs: http://paste.ubuntu.com/16184547/ (line 52 there seems like the first bad thing, aborts on the next line)
<menn0> veebers: weird... all I can suggest is to look at the mysql logs for the machine hosting the unit and the logs for the juju unit itself.
<veebers> menn0: aye, posted is the mysql error log.
<menn0> veebers: I just did some digging. it seems like mysql is wanting to allocate 12.5GB for its buffer pool. This comes from the "dataset-size: 80%" (I'm guessing your machine has 16GB of RAM)
<menn0> veebers: this is fine on a completely isolated machine but with lxd, they're all seeing the same available memory so there's probably not enough memory left after mediawiki is installed (and whatever else is running on your machine)
<menn0> veebers: it would be interesting to see what happens when you deploy mysql with "dataset-size: 20%" or something
<menn0> veebers: actually, better idea. before deploy into the model do this: lxc profile set juju-<your-model-name> limits.memory 1GB
<menn0> that will cause all lxd containers for the model to be limited to 1GB of RAM
<menn0> then mysql will only attempt to grab 80% of 1GB
<menn0> veebers: I bet that would do the trick
<menn0> tweak the limit as you like of course
<veebers> menn0: interesting, I've learned something new. I'll have a crack at trying that. Thanks :-)
<menn0> veebers: I learned a few things too :)
<veebers> menn0: is that 'lxc profile' or 'lxd profile'
<menn0> veebers: lxc
<menn0> the command to interact with lxd is lxc (confusingly)
<menn0> the lxd binary is the daemon itself
<veebers> hah awesome, thanks for clarifying.
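menn0's recipe, collected in one place: capping container memory per model stops every lxd container from sizing itself against the whole host's RAM, which is what made mysql's "dataset-size: 80%" ask for 12.5GB. The commands need a live lxd, so this sketch only prints what would be run; the model name is illustrative.

```shell
MODEL=default      # substitute your juju model name
LIMIT=1GB          # mysql will then grab 80% of this, not 80% of host RAM

# lxc (not lxd) is the client command, as clarified above.
SET_CMD="lxc profile set juju-$MODEL limits.memory $LIMIT"
SHOW_CMD="lxc profile show juju-$MODEL"   # confirm the limit is in place
echo "$SET_CMD"
echo "$SHOW_CMD"
```

Run the profile change before deploying into the model, so containers are created under the limit.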
<mup> Bug #1551141 changed: Juju bootstrap local - cannot get replset config: not authorized for query on local.system.replset <juju-core:Expired> <https://launchpad.net/bugs/1551141>
<menn0> thumper: review please https://github.com/juju/juju/pull/5322
 * thumper looks
<thumper> menn0: done
 * thumper afk for a bit
<thumper> off to the storage unit
<menn0> thumper: cheers
<menn0> thumper: interestingly this was filed very recently: https://github.com/juju/juju/pull/5322
<menn0> thumper: this even: https://bugs.launchpad.net/juju-core/+bug/1576851
<mup> Bug #1576851: juju debug-log -i unit-rabbitmq-server-0 is unfriendly <debug-log> <juju-release-support> <usability> <juju-core:Triaged> <https://launchpad.net/bugs/1576851>
 * thumper looks
<thumper> yeah
<alexisb> katco, when you are in please ping me
<perrito666> now that is an existencial request
<perrito666> oh, missed the in :p
<alexisb> perrito666, :)
<natefinch> mgz: looking at the curl windows/centos SSL bug
<natefinch> mgz: what version of curl are we using on centos?
<natefinch> mgz: a quick google says older versions didn't have tls 1.1 or 1.2 enabled by default
<natefinch> sinzui: ^ alternatively, can you give me access to an example centos machine?
<natefinch> RHEL-7 (lib)curl does not enable TLS > 1.0 by default.  Please use the --tlsv1 option of curl to negotiate the highest TLS version supported by client/server.
<sinzui> natefinch: I think I can give you one based on a snapshot of the current host that runs unit tests. It will take a while because I need to make a snapshot of the current one.
<sinzui> natefinch: in direct answer to your question
<sinzui> curl --version
<sinzui> curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.19.1 Basic ECC zlib/1.2.7 libidn/1.28 libssh2/1.4.3
<sinzui> natefinch: one moment, I got confused. you're working on bug 1576873 in our maases
<sinzui> ?
<mup> Bug #1576873: Juju2 cannot deploy centos or windows workloads on maas 1.9 <blocker> <centos> <ci> <maas-provider> <regression> <windows> <juju-core:In Progress by natefinch> <https://launchpad.net/bugs/1576873>
<natefinch> sinzui: yes
<sinzui> natefinch: the curl version might be different. Probably not because I think yum is used to install curl
<natefinch> sinzui: it probably doesn't matter. The error is fairly specific.
<natefinch> sinzui: pretty sure if we just add --tlsv1, it'll work.  But having a real centos machine to poke at would help ensure the fix is correct without having to go through a full commit & CI run
<natefinch> I guess I could always just fire up my own VM
<mup> Bug #1577415 opened: resource-get hangs when trying to deploy a charm with resource from the store <juju-core:New> <https://launchpad.net/bugs/1577415>
<sinzui> natefinch: Our centos are stock centos7 with some yum packages installed.
<natefinch> sinzui: hmm, ok.  from starting the stock centos image on GCE, it looks like it is using a new enough version of curl to support TLS 1.2
<sinzui> natefinch: I am deploying a centos7 on the maas 1.9. I don't expect it to be different, but I want to make sure.
<natefinch> sinzui: thanks
<natefinch> sinzui: I'd like to take a look when it's ready
<sinzui> natefinch: okay, this will be an adventure. have you got ssh rules to get into munna?
<natefinch> sinzui: probably not, since I don't even know what munna is
<sinzui> natefinch: okay. once I am in, I will send you several ssh stanzas that will allow you to hop through all the intermediate hosts
<natefinch> sinzui: good times
<katco> ericsnow: standup time
<katco> alexisb: i think cherylj is out, so pinging you :) can we make this a blocker for 2.0 overall? 1577415
<katco> alexisb: bug 1577415
<mup> Bug #1577415: resource-get hangs when trying to deploy a charm with resource from the store <juju-core:New> <https://launchpad.net/bugs/1577415>
<alexisb> yes we can add it to the list of blockers
<katco> alexisb: also, we're trying to get 1-pagers complete for a review tomorrow. can we work on that, or do we still need to be working on blocker bugs?
<natefinch> katco: we all just bailed. basically done anyway
<mbruzek> Does anyone know how to get to the controller machine using Juju commands? One machine didn't provision for me, and I am trying to get the logs from the controller, but I don't know how to refer to it using juju commands.
<katco`> ericsnow: natefinch: redir: hilarious timing. right after you asked that, my power went out
<natefinch> mbruzek: juju switch controller:admin && juju ssh 0
<mbruzek> thanks natefinch I am going to add that to the developer documentation
<natefinch> mbruzek: (where controller is the name of the controller, obviously)
<mbruzek> Yes.
<katco> ericsnow: natefinch: ok plan for specs vs. bugs
<katco> ericsnow: please time-box your work on the spec to lunch, and then switch over to bugs
<katco> natefinch: please just keep working on bugs
<alexisb> perrito666, when you have a second I would like to chat with you
<natefinch> katco: cool thanks
<ericsnow> katco: k
<katco> ericsnow: natefinch: i'll send out another email to ian letting him know the priority call
<perrito666> alexisb: is now ok?
<katco> ericsnow: natefinch: ta!
<alexisb> perrito666, of course
<perrito666> hangout or irc?
<katco> ericsnow: when you do pick up another bug, bug 1576913 looked related to what you were working on friday.
<mup> Bug #1576913: StatusHistorySuite.TestPruneStatusHistory <blocker> <ci> <go1.6> <regression> <test-failure> <unit-tests> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1576913>
<ericsnow> katco: k, thanks
<katco> natefinch: and this looked related to the area you're in: bug 1577415
<mup> Bug #1577415: resource-get hangs when trying to deploy a charm with resource from the store <juju-core:New> <https://launchpad.net/bugs/1577415>
<mup> Bug #1576705 changed: cloudImageMetadataSuite.TestSaveDiffMetadataConcurrentlyAndOrderByDateCreated wrong order <blocker> <ci> <go1.6> <regression> <test-failure> <unit-tests> <windows> <juju-core:Fix Released by cox-katherine-e> <https://launchpad.net/bugs/1576705>
<katco> natefinch: err... wrong bug: 1576695
<katco> natefinch: bug 1576695
<mup> Bug #1576695: Deployer cannot talk to Juju2 (on maas2) because :tlsv1 alert protocol version <ci> <deployer> <maas-provider> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1576695>
 * katco spams the channel
 * natefinch breaks everything
<natefinch> honestly, I'm pretty happy breaking people who aren't supporting the most secure connection possible... especially when it's not exactly a bleeding edge configuration
<natefinch> I wish we could have aliases for clouds, so when I type juju bootstrap gce gce or ec2 ec2, it actually worked
<natefinch> also, defaulting to the name of the cloud would be nice
<natefinch> katco: yeah, that's definitely due to the TLS change. Oddly enough, the error message is ssl.SSLError: [Errno 1] _ssl.c:510: error:1409442E:SSL routines:SSL3_READ_BYTES:tlsv1 alert protocol version  .... SSL3??? definitely not secure. We weren't even supposed to be supporting that previously. If it worked with SSL3 before, that was a bug
<katco> natefinch: glad it is in capable hands :)
<natefinch> katco: do you know who controls the deployer code?  I don't even know where it lives or who to talk to about it
<katco> natefinch: that is ecosystems, i.e. marcoceppi
<natefinch> sinzui: FWIW, I can run that curl using the same flags etc from a generic GCE Centos7 VM to a server deployed from master's Juju... so not sure what's different about the CentOS that I'm running vs. what CI is deploying.
<sinzui> natefinch: yeah. I don't know either. the last two I tried to deploy just failed to come up. I need to look into the health of the maas.
<sinzui> natefinch: I could just re-run the failing job with --keep-env so that we can get to the actual machine that failed
<natefinch> sinzui: that would be useful
 * sinzui starts job
<mup> Bug #1576728 changed: ConnectSuite.TestLocalConnectError: windows cannot connect to local lxd server <blocker> <ci> <go1.6> <regression> <test-failure> <unit-tests> <windows> <juju-core:Fix Released by cox-katherine-e> <https://launchpad.net/bugs/1576728>
<sinzui> natefinch: I am in the centos instance on the maas. I see
<sinzui> curl --version
<sinzui> curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.15.4 zlib/1.2.7 libidn/1.28 libssh2/1.4.3
 * sinzui prepares connection info
<natefinch> sinzui: that is a slightly different version, at least of NSS
<sinzui> natefinch: yeah. almost got you the info
<sinzui> natefinch: maas centos images have a different origin. They officially come from maas
<sinzui> by way of something
<sinzui> natefinch: check you email for connection info
<natefinch> sinzui: thanks
<sinzui> natefinch: ha ha. since munna can only access ubuntu/canonical machines. It cannot get updates. I wonder if curl has an update, but we cannot get it
<sinzui> natefinch: damn, I can see the centos images in the maas are current from http://images.maas.io/ephemeral-v2/daily/
<natefinch> gah, is there a way to get the model UUID from juju?
<natefinch> usually I just peel it off the instanceID, but I guess maas doesn't do that
<natefinch> doesn't matter... I get error even with a bad URL, makes sense.
<natefinch> sinzui: gotta run for lunch, back in an hour
<natefinch> sinzui: I'm logged in, and would like to continue after lunch, but feel free to kick me off if you need to
<sinzui> natefinch: The machines are yours for now. I don't think CI will miss them today
<marcoceppi> natefinch-lunch: what you need for deployer?
<redir> ericsnow: got a second to explain your last comment on that review?
<ericsnow> redir: sure
<redir> k. I'll be in moonstone when you get to a stopping point.
<natefinch> marcoceppi: detailed in #juju, but essentially core disabled everything but TLS1.2 and the python deployer chokes on that for some reason
<natefinch> marcoceppi: https://bugs.launchpad.net/juju-core/+bug/1576695
<mup> Bug #1576695: Deployer cannot talk to Juju2 (on maas2) because :tlsv1 alert protocol version <ci> <deployer> <maas-provider> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1576695>
<natefinch> sinzui: it's definitely the NSS version... the version on the CI machine is from April 2014.  The one on my GCE instance is from June 2015.  In between there, TLS 1.1 and 1.2 got enabled by default... they weren't before that
<sinzui> natefinch: do we need to report a bug against maas? the images come from them
<sinzui> wow, maas centos images are 2 years stale?
<natefinch> sinzui: I have no idea if their image is "incorrect"
<natefinch> sinzui: this one library seems significantly out of date in a way that happens to screw us
<sinzui> natefinch: I think an old image is being adopted. I am going to have a chat with some parties
<natefinch> sinzui: if we add  --ciphers ecdhe_rsa_aes_256_sha --tlsv1 to the curl command, it works
<natefinch> (in theory we could add all the ciphers that the server supports, but I know it supports that one so I just chose one)
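The workaround, spelled out: the stale NSS behind curl 7.29 on the maas centos image leaves TLS 1.1/1.2 off by default, so the fetch has to force TLS 1.x and name a cipher the state server is known to accept (ecdhe_rsa_aes_256_sha is NSS's spelling of it). There's no server to hit here, so this sketch only prints the command; the URL is a placeholder.

```shell
# Placeholder address: the real tools URL comes from the agent's config.
TOOLS_URL="https://controller.example:17070/tools/..."

# --tlsv1 negotiates TLS 1.x; --ciphers names one the server supports.
CMD="curl --ciphers ecdhe_rsa_aes_256_sha --tlsv1 -o tools.tgz $TOOLS_URL"
echo "$CMD"
```

As natefinch notes, listing every server-supported cipher would also work; one known-good cipher is enough to get past the stale defaults.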
<sinzui> natefinch: oh, nice, I am still hoping for a talk with others about fresh images. Surely something else will fail this year
<natefinch> sinzui: defaulting to having tls 1.1 and 1.2 disabled is kind of crazy.
<sinzui> natefinch: agreed
<mup> Bug #1577524 opened: Error calling ''lxd forkstart juju-machine-2-lxd-0 /var/lib/lxd/containers             /var/log/lxd/juju-machine-2-lxd-0/lxc.conf'': err=''exit status 1'' <ci> <deploy> <intermittent-failure> <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1577524>
<marcoceppi> natefinch: lp:juju-deployer is the code
<thumper> fwereade_: hey, I'm around early if you want to jump on the hangout
<natefinch> sinzui: do you know what version of python is running for the deployer tests?  Do we log that anywhere? I see 2.7... but 2.7.what?
<natefinch> sinzui: sounds like some older versions of python 2.7 don't have TLS 1.2 support
<natefinch> (yay for runtime dependencies :/)
<sinzui> natefinch: as always, that depends on the ubuntu version of the host. We test deployer with xenial, wily, trusty
<natefinch> sinzui: np, I'll check.... I'm guessing trusty comes with 2.7.6
<natefinch> sinzui: yep, that's it
<sinzui> natefinch: trusty is Python 2.7.6. xenial is 2.7.11+
<natefinch> fantastic
<natefinch> Python 2.7.6 was released on November 10, 2013
<natefinch> sinzui: is there any way the deployer can be made to require a newer version of python?
<sinzui> natefinch: That is unlikely. deployer in trusty needs to work with trusty
<natefinch> sinzui: and trusty can't be updated with a version of python newer than 2.5 years old?
<sinzui> natefinch: Security updates are made from time to time. this is the reality that users have https://launchpad.net/ubuntu/+source/python2.7
<natefinch> sinzui: I'd call "not having support for tls 1.2" a security issue :/
<alexisb> natefinch, if there is an issue with deployer we should open a bug against it
<natefinch> sinzui: sorry, don't mean to be grumpy
<natefinch> alexisb: yep
<alexisb> we have plenty of our own bugs to work
<alexisb> marcoceppi and team are more than capable :)
<natefinch> alexisb: really, it's just the version of python on trusty that is the problem
<sinzui> natefinch: https://launchpad.net/ubuntu/+source/python3.4 is in trusty. If that is suitable, then deployer needs to require it
<natefinch> alexisb: maybe there's a code work around, I dunno
<alexisb> I am just reading back scroll but it looks to me that deployer needs to learn about the right version of python in trusty?
<sinzui> alexisb: only if the right version is in trusty
<natefinch> alexisb: well, it sounds like their choices are 2.7.6, which is flawed, or 3.x ... which may be a non-trivial change
<alexisb> natefinch, either way looks like a bug against deployer needs to be opened so discussion can start there and we can get the right eyes on the problem
<mup> Bug #1577550 opened: juju fails to provision machine and will not retry <juju-core:New> <https://launchpad.net/bugs/1577550>
<natefinch> alexisb: absolutely
<natefinch> sinzui: should we move the current bug to deployer? https://bugs.launchpad.net/juju-core/+bug/1576695
<mup> Bug #1576695: Deployer cannot talk to Juju2 (on maas2) because :tlsv1 alert protocol version <ci> <deployer> <maas-provider> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1576695>
<natefinch> sinzui: btw, pretty sure the fact it's on Maas is a red herring
<sinzui> natefinch: I think we will add them so that we can track the issue too.
<natefinch> sinzui: sounds good
<sinzui> what
<natefinch> sinzui: just saying, the bug title mentions maas2, but I think that's not an interesting data point, other than it's trusty
<sinzui> natefinch: something is amiss. The bug is about deployer 2.x, not maas 2.
 * sinzui fixes bug and issue
<mgz> there are two bugs
<mgz> one is maas centos images
<mgz> one is python 2.7.6
<natefinch> ...and one is windows that I'm just starting to look at, and I don't really understand what the failure is saying: http://reports.vapour.ws/releases/3935/job/maas-1_9-deploy-centos-amd64/attempt/364#highlight
<natefinch> mgz: those two bugs you mentioned are filed separately
<mgz> natefinch: I know, but they were the two I was just looking at
<natefinch> mgz: ahh ok
<mgz> natefinch: windows is likely similar issue to centos, but for some reason we don't have the cloud-init logs from the machine
<ericsnow> katco: you were right that it looked like the same thing:  http://reviews.vapour.ws/r/4752/
<mgz> the winrm log collection times out trying to get in
<mgz> it worked when the test was passing
<natefinch> mgz: maybe also a python 2.7 problem?
<katco> ericsnow: awesome!
<mgz> natefinch: yeah, could be, not sure what version the image includes
<ericsnow> katco: quick review?
<katco> ericsnow: sure sec
<mgz> the last successful run's cloudbase-init log has rather a lot of non-confidence-inspiring tracebacks
<katco> ericsnow: is the diff reversed? you removed your time.Sleep?
<mgz> we probably want to regenerate the windows image in our maas anyway, but I'm not sure if tls stuff is fixed in newer cloudbase bits
<ericsnow> katco: moved it over to the common helper
<katco> ericsnow: ahh
<mgz> our image has 0.9.8.dev74, there's a 0.9.9 at least
<katco> ericsnow: ship it
<ericsnow> katco: thanks
<natefinch> mgz: any thoughts on how I can debug the windows problem, or are you willing to look into that?
<mgz> natefinch: with the machine setup failing hard enough to break winrm log collection it's hard
<mgz> we could boot one and leave it up, see if it's possible to get in manually
<natefinch> mgz: sounds good
<mgz> sinzui: ^do you remember if it's possible to rdp into a maas-booted windows image somehow?
<sinzui> mgz: I think it is possible but the path is mad. I think we need to sshuttle through ci-gateway => munna like we do to see maas via https. I expect the maas to be on at least one of the networks we would use to try to access the windows instance
<sinzui> mgz: oh, that assumes we know the Administrator password or ubuntu if we create an ubuntu user
<mgz> sinzui: yeah, I'm not sure I've actually tried it before
<mgz> sinzui: it seems regardless we want updated images, which is a whole process
<mgz> but we did write that down
<mgz> natefinch: see addDownloadToolsCmds in cloudconfig/userdatacfg_win.go for how the tools are being fetched
<natefinch> mgz: thanks
<mgz> which boils down to System.Net.Http.HttpClient
<sinzui> mgz: I think http://wiki.cloudbase.it/maas have been used
<sinzui> mgz: I recall this was also tried https://maas.ubuntu.com/docs/os-support.html
<natefinch> mgz: looks like it's probably an easy enough fix.... I think it's just a matter of setting it to try TLS 1.2 first
<mgz> natefinch: that might mean a patch to cloudbase code outside of juju though
<mgz> going by some stackoverflow bits that suggest Tls11 and Tls12 are not in the default list
<mgz> hm, maybe that can be done via the cloudinit steps juju passes too
<natefinch> mgz: I think so
<natefinch> mgz: seems like we can add it right into our code, just one line
<mgz> probably another "do it in both places for now" thing
<natefinch> yeah
<natefinch> gotta go, dinner time
<mgz> natefinch-afk: if you come up with a speculative branch, we can run the test with a binary and see
<sinzui> mgz: I think we also need to ask what the azure win images have?
<mgz> sinzui: I guess, but we don't have an azure windows deploy test
<mgz> this is also going to be windows version dependent as the default behaviour changes with .NET releases
<mup> Bug #1577567 opened: relation output in juju status is ambiguous <juju-core:New> <https://launchpad.net/bugs/1577567>
<mup> Bug #1577568 opened: juju 1.25.5 problems with bonded nics <landscape> <juju-core:New> <https://launchpad.net/bugs/1577568>
<mup> Bug #1577569 opened: 1.25.5: failed to retrieve the template to clone -  error executing  "lxc-start" <oil> <juju-core:New> <https://launchpad.net/bugs/1577569>
<sinzui> mgz: We don't have any windows streams for azure to test. Could we contrive to use azure-arm's inbuilt support for windows? It might take 30 minutes to set up such a job
<mgz> sinzui: probably, worth poking axw about it
<mup> Bug #1577587 opened: Status public members should not be preceded by Status <tech-debt> <juju-core:New for hduran-8> <https://launchpad.net/bugs/1577587>
<mup> Bug #1577589 opened: Valid in status package needs signature change. <tech-debt> <juju-core:New for hduran-8> <https://launchpad.net/bugs/1577589>
<mup> Bug #1577590 opened: Status History Logs Squasher needs extra testing. <tech-debt> <juju-core:New for hduran-8> <https://launchpad.net/bugs/1577590>
<mup> Bug #1577593 opened: status sitory pruner needs to remove only once <tech-debt> <juju-core:New for hduran-8> <https://launchpad.net/bugs/1577593>
<mup> Bug #1577594 opened: params.Status->status.Status should happen in status history api layer not command <tech-debt> <juju-core:New for hduran-8> <https://launchpad.net/bugs/1577594>
<mup> Bug #1577594 changed: params.Status->status.Status should happen in status history api layer not command <tech-debt> <juju-core:Invalid by hduran-8> <https://launchpad.net/bugs/1577594>
<mup> Bug #1577598 opened: Use testing.FakeHomeSuite instead of utils.SetHome(). <juju-core:New> <https://launchpad.net/bugs/1577598>
#juju-dev 2016-05-03
 * redir is going EoD RSN
<mup> Bug #1577550 changed: juju fails to provision machine and will not retry <juju-core:New> <https://launchpad.net/bugs/1577550>
<mup> Bug #1577569 changed: 1.25.5: failed to retrieve the template to clone -  error executing  "lxc-start" <oil> <juju-core:Invalid> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1577569>
<mup> Bug #1576805 changed: Juju controllers set the rsyslog NetstreamDriver affecting all subsequent rsyslog configuration <juju-core:Invalid> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1576805>
<mup> Bug #1577606 opened: StatusInfo has a time.Time pointer, shoult not be a pointer. <tech-debt> <juju-core:New for hduran-8> <https://launchpad.net/bugs/1577606>
<natefinch> oh weird.... bootstrapping on gce, I get a log message saying "Creating Juju controller "local.google" on google/us-east1" but it's actually created in us-central1
<mup> Bug #1577609 opened: juju switch to non-existent controller gives wrong error <juju-core:New> <https://launchpad.net/bugs/1577609>
<mup> Bug #1577614 opened: Bootstrapping GCE lies about the availability zone of the controller <juju-core:New> <https://launchpad.net/bugs/1577614>
<thumper> axw_: ping
<axw_> thumper: pong
<axw_> sorry, missed message
<thumper> axw: np, was wanting to talk to you about storage entities ...
<thumper> however I need to take my daughter to BJJ shortly
<thumper> but I could drop and come straight back to chat if you can
<axw> thumper: sure, I can chat
<thumper> in about 25 minutes?
 * thumper wonders if Maia has remembered
<axw> thumper: no worries, prod me when you're back
<thumper> kk
<thumper> axw: back
<axw> thumper: ok, gotta make a hangout
<axw> thumper: https://plus.google.com/hangouts/_/canonical.com/juju-storage?authuser=1
<mup> Bug #1577638 opened: Can't juju ssh to lxd-placed container <juju-core:New> <https://launchpad.net/bugs/1577638>
<dimitern> morning
 * dimitern thinks it might be time to change my ISP :/
<anastasiamac> dimitern: o/
<dimitern> anastasiamac: \o
<anastasiamac> dimitern: how was easter?
<dimitern> anastasiamac: as usual :) I got too much to eat
<anastasiamac> dimitern: and isn't it a great feeling? \o/
<dimitern> anastasiamac: ahh it is :)
<dimitern> frobware: ping
<babbageclunk> dimitern: stink about the internet bro - hope you get it sorted out soon!
<dimitern> babbageclunk: well, they did fix it, but kept me on hold for 20m
<babbageclunk> dimitern: grrr, but yay!
<dimitern> their support is a joke, it's good I rarely had to contact them
<TheMue> morning
<dimitern> axw: hey, are you still around?
<axw> dimitern: sure am
<dimitern> axw: I've noticed you've landed the changes in provider/ec2 re filtering instances purely based on tags
<dimitern> axw: I'm trying to integrate that into my fix for allowing non-default VPC to be used
<axw> dimitern: apart from AllInstances, which filters on security groups
<dimitern> axw: ah, so I'll merge the changes there
<axw> dimitern: yep, should just be in one place now
<dimitern> axw: how did you get around the non-idempotent nature of adding tags at start instance time?
<axw> dimitern: which non-idempotency issue would that be?
<axw> dimitern: the one we talked about last was to do with using ClientToken, but I didn't end up using that
<voidspace> do you mean non-atomic?
<dimitern> voidspace: that's it :)
<axw> dimitern voidspace: it's still non-atomic, which is why I'm using security groups in that one spot
<dimitern> axw: ok, makes sense
<axw> dimitern: we only *need* it in AllInstances, because that's where we need to list instances to be cleaned up by the provisioner
<voidspace> dimitern: babbageclunk: frobware: doorbell - will be standup shortly
<axw> dimitern: if we didn't tag it, it's busted anyway, so no point returning it from anything else
<dimitern> axw: right
<dimitern> axw: ok, thanks I'll take it from here :)
<thumper> babbageclunk: are you in the standup?
<babbageclunk> thumper: oops, didn't notice that because I was in the standup
<babbageclunk> voidspace, dimitern: review please? https://github.com/juju/gomaasapi/pull/54
<babbageclunk> voidspace, dimitern: I'm going to go through the MAAS provider and make sure we can deal with machine.BootInterface() or interface.VLAN() being nil.
<dimitern> babbageclunk: looking
 * babbageclunk is out - meeting someone for lunch
<jcastro> http://askubuntu.com/questions/766851/how-do-i-use-a-feature-flag/76685
<jcastro> can someone peer review my answer please?
<dimitern> jcastro: looks correct
<babbageclunk> dimitern: is the maas networking interlock happening?
<babbageclunk> dimitern: Thanks for the review, by the way!
<dimitern> babbageclunk: oops, I forgot - joining now
<voidspace> babbageclunk: cool
<mup> Bug #1577776 opened: 2.0b6: asks for domain-name, then doesn't know what it is <landscape> <juju-core:New> <https://launchpad.net/bugs/1577776>
<mup> Bug #1576704 changed: MigrationExportSuite.TestUnits unequal results <blocker> <ci> <go1.6> <regression> <test-failure> <unit-tests> <windows> <juju-core:Fix Released by ericsnowcurrently> <https://launchpad.net/bugs/1576704>
<mup> Bug #1576913 changed: StatusHistorySuite.TestPruneStatusHistory <blocker> <ci> <go1.6> <regression> <test-failure> <unit-tests> <windows> <juju-core:Fix Released by ericsnowcurrently> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1576913>
<voidspace> anyone seen this?
<voidspace> mongo/open.go:116: undefined: tls.TLS_RSA_WITH_AES_128_GCM_SHA256
<voidspace> ah, wrong version of go probably
<babbageclunk> voidspace, dimitern: If iface.VLAN() is nil at https://github.com/juju/juju/blob/master/provider/maas/interfaces.go#L249
<dimitern> babbageclunk: then VLANTag should be 0
<babbageclunk> what should happen? Skip this interface in the loop? Use 0 as the tag? Error out?
<babbageclunk> dimitern: ok, awesome, thanks!
<voidspace> babbageclunk: how can machine.InterfaceSet() return a nil interface?
<voidspace> babbageclunk: I'm pretty sure it can't
<babbageclunk> voidspace: It can return an interface for which the VLAN is null (or will be able to in the near future).
<voidspace> babbageclunk: ah, VLAN is nil
<voidspace> babbageclunk: sure
<voidspace> I misread
<babbageclunk> voidspace: :)
<mup> Bug #1577568 changed: juju 1.25.5 problems with bonded nics <landscape> <juju-core:Invalid> <juju-core 1.25:Invalid> <https://launchpad.net/bugs/1577568>
<mup> Bug #1577798 opened: azure 400 bad request creating networks <azure-provider> <blocker> <bootstrap> <ci> <juju-core:Triaged> <https://launchpad.net/bugs/1577798>
<babbageclunk> voidspace, dimitern: review for the Juju side of the nullable/optional fields work? http://reviews.vapour.ws/r/4759/
<dimitern> babbageclunk: LGTM
<babbageclunk> dimitern: ta!
<voidspace> babbageclunk: \o/
<dooferlad> dimitern / voidspace / babbageclunk: Please take a look: http://reviews.vapour.ws/r/4758
<dooferlad> it is mostly a forward port. The only change vs 1.25 is https://github.com/juju/juju/pull/5333/commits/4fe1310661e8256164ab5039a0079fc049060d84
<dooferlad> so just take a look at that.
<babbageclunk> dooferlad: Ha ha, took me moment to click that I was reading Python not Go.
<dooferlad> babbageclunk: :-)
<dimitern> dooferlad: LGTM - I'm happy the bridge script shows routes once more btw - it was a pain to debug without it
<dooferlad> dimitern: thanks
<mup> Bug #1577816 opened: Deploy of cs:trusty/ubuntu gives me xenial <landscape> <juju-core:New> <https://launchpad.net/bugs/1577816>
<mup> Bug #1577556 opened: mysql charm is failing to get unit-get private-address in the install hook <juju-core:New> <mysql (Juju Charms Collection):Fix Released> <https://launchpad.net/bugs/1577556>
<babbageclunk> dimitern: If I want to install juju-mongodb3.2 do I have to have xenial installed?
<dimitern> babbageclunk: I suspect so, but I don't know for sure
<babbageclunk> dimitern: :/ I guess upgrading might be the thing to do now anyway.
<dimitern> babbageclunk: +1
<perrito666> babbageclunk: yes, iirc there is no mongo 3.2 package for earlier versions of ubuntu
<babbageclunk> ok, thanks perrito666.
<ejat> hi .. is there anything in the documentation explaining briefly how to get MS Azure credentials?
<natefinch> uh, wow, we just let you set whatever values you want in the model config? There's no check if something is a valid configuration value?
<ejat> natefinch: u mean in credentials.yaml ?
<xnox> frobware, hi
<mup> Bug #1577816 changed: Deploy of cs:trusty/ubuntu gives me xenial <landscape> <juju-core:New> <https://launchpad.net/bugs/1577816>
<alexisb> katco, looks like this bug just got a lot easier :) see urulama's latest comment: https://bugs.launchpad.net/juju-core/+bug/1576509
<mup> Bug #1576509: Race in macaroon-bakery <blocker> <ci> <race-condition> <regression> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1576509>
 * katco looks
<katco> alexisb: nice! :) should be a small update to dependencies.tsv :D
<alexisb> whoo hoo for an easy win thanks to rogpeppe1 !
<rogpeppe1> alexisb: :)
<ejat> marcoceppi: ERROR autorest:WithErrorUnlessStatusCode POST https://login.microsoftonline.com/fb30bf07-xxxx-xxxx-b9a8-02ef08680fb9/oauth2/token?api-version=1.0 failed with 400 Bad Request
<natefinch> juju status doesn't show containers?
<natefinch> boggle
<natefinch> (tabular that is, they're shown in yaml)
<mup> Bug #1577900 opened: juju set-model-config's UX is very confusing <juju-core:New> <https://launchpad.net/bugs/1577900>
<tych0> frobware: hi
<frobware> https://bugs.launchpad.net/juju-core/+bug/1242783
<mup> Bug #1242783: containers should use constraints to configure cgroups/kvm values <constraints> <local-provider> <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1242783>
<frobware> tych0: https://bugs.launchpad.net/juju-core/+bug/1242783
<tych0> frobware: ah
<tych0> so this has been around for a while :)
<mup> Bug #1577900 changed: juju set-model-config's UX is very confusing <juju-core:New> <https://launchpad.net/bugs/1577900>
<natefinch> am I crazy... didn't juju ssh use to work for lxc containers juju creates?
<natefinch> ericsnow, katco, cherylj ^
<perrito666> dont remember, but it should
<katco> natefinch: i think it did
<perrito666> you should be able to ssh into an lxc container
<natefinch> well, you can't
<natefinch> $ juju ssh 1/lxc/0
<natefinch> ssh: connect to host 10.0.3.88 port 22: No route to host
<perrito666> bug?
<natefinch> tried on aws and gce
<perrito666> natefinch: pastebin your ifconfig
<perrito666> ah that not on your machine?
<perrito666> actually I changed my mind, paste ifconfig and lxc list
 * perrito666 smells something fishy there
<natefinch> perrito666:  yeah, not lxd provider, lxc containers on real cloud machines
<perrito666> ohh, I see what is going on there
<perrito666> that IS a bug
<natefinch> reporting
<perrito666> voidspace: any input on that?
<cherylj> natefinch: https://bugs.launchpad.net/juju-core/+bug/1577638/comments/3
<mup> Bug #1577638: Can't juju ssh to lxd-placed container <juju-core:Incomplete> <https://launchpad.net/bugs/1577638>
<perrito666> ah I see
<perrito666> well that is poor ux, juju ssh should know how to jump through the hoops
<natefinch> yeah, that's half the reason juju ssh exists, is to jump through hoops for you
<cherylj> natefinch: this was a security concern - allowing users who don't have access to the admin model
<perrito666> cherylj: interesting
<natefinch> man we're bad at UX
<perrito666> cherylj: actually you could ssh into the container by going through the host
<perrito666> or you could actually do proper port binding there and get the 22 exposed
<natefinch> cherylj: can we set the status to something other than incomplete? the bug seems well-enough specified.  I'd prefer we raised this up as a fairly major UX problem, and regression from 1.x
<alexisb> katco, ericsnow, one of you guys available for technical backup on an sts q?
<ericsnow> alexisb: sure
<alexisb> https://plus.google.com/hangouts/_/canonical.com/juju-core-sts
<alexisb> ericsnow, ^^
<mup> Bug #1577939 opened: Backup-restore failed on xenial because service "juju-db": No such file or directory <backup-restore> <blocker> <ci> <xenial> <juju-core:Incomplete> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1577939>
<mup> Bug #1577945 opened: Bootstrap failed: DNS/routing misconfigured on maas 1.8 <blocker> <bootstrap> <ci> <maas-provider> <network> <regression> <juju-core:Incomplete> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1577945>
<mup> Bug #1577949 opened: windows services cannot upgrade to 1.25.6 <blocker> <ci> <regression> <upgrade-juju> <windows> <juju-core:Incomplete> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1577949>
<perrito666> wallyworld: is standup happening anyway?
<wallyworld> perrito666: sure, why not
<anastasiamac> wallyworld: perrito666: why would standup not b happening?
<perrito666> well we had another meeting happening at that time and then we changed it
<anastasiamac> wallyworld: perrito666: oh i have release/bug updates for standup :D
<perrito666> I dont think we can push it back 20 mins or so?
 * perrito666 looks at wallyworld anastasiamac and axw 
<wallyworld> perrito666: you want to delay? fine with me
 * axw looks back
<axw> I can delay a bit
<perrito666> k bbh
<wallyworld> perrito666: axw: there's 3 in the call as the others didn't see the backscroll; axw want to jump in and perrito666 can join us when ready
<axw> wallyworld: okey dokey
<perrito666> I am here
<redir> ericsnow: yst?
<perrito666> wallyworld: https://bugs.launchpad.net/juju-core/+bug/1576313
<mup> Bug #1576313: windows: uniter tests fail because logs get dumped to stderr <blocker> <ci> <regression> <test-failure> <windows> <juju-core:In Progress by hduran-8> <https://launchpad.net/bugs/1576313>
<axw> redir: wallyworld: to juju-core actually, maybe reed's not on it
 * redir looks
<anastasiamac> redir: i forwarded it to u directly \o/
<redir> axw: wallyworld I don't appear to be on a juju-core list.
<redir> anastasiamac: tx:)
<anastasiamac> redir: and i think that u r on the mailing list (i can see u)
<anastasiamac> :D
<redir> hmmm
<anastasiamac> it's going to ur namesurname.com
<redir> anastasiamac: I think it is a bug:)
<anastasiamac> bug = user error?
<redir> I have received no emails from that from address...
<anastasiamac> redir: taking it to pm
#juju-dev 2016-05-04
<mup> Bug #1577988 opened: Revert destroy service when machine is off <juju-core:New> <https://launchpad.net/bugs/1577988>
<wallyworld> redir: ericsnow: meeting?
<menn0> axw: ping
<axw> menn0: pong, in a meeting atm
<menn0> axw: np... let me know when you're done please
<axw> menn0: sorry, back now
<thumper> either, we suck horribly
<thumper> or the last CI test run had issues
 * thumper gives each one about 50%
<menn0> axw: np... I think I've mostly figured it out but i'll confirm with you anyway
<menn0> axw: you did the debughooks work right?
<axw> menn0: yes, long ago
<menn0> axw: I'm in the final throes of integrating the SSH host key stuff and noticed the debughooks tests don't do what they appear to
<menn0> axw: https://github.com/juju/juju/blob/master/cmd/juju/commands/debughooks_test.go
<menn0> axw: in debugHooksTests the proxy and result fields aren't used :)
<menn0> axw: I guess I just wanted be sure that I wasn't missing something before fixing
<axw> menn0: :/  they used to be
<axw> menn0: don't think you're missing anything
<menn0> axw: must have been later edits that broke the tests
<menn0> axw: I need to rework them anyway. i'm trying to get them to pass again.
<menn0> (with my changes)
<mup> Bug #1508923 changed: Support for Azure Resource Groups <azure-provider> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1508923>
<mup> Bug #1291201 changed: Azure provider does not use storage service's affinity group <azure-provider> <juju-core:Won't Fix> <https://launchpad.net/bugs/1291201>
<axw> wallyworld: I was expecting at least an "argh!" response from someone to using sync.Cond :)
<wallyworld> axw: i can live with it :-)
<wallyworld> seems to do what's needed nicely
<axw> wallyworld: yeah, it's appropriate IMO. people reach for channels because they're easy, but they're not always sufficient
<wallyworld> +1
<wallyworld> when you have a hammer, everything looks like a nail
<axw> wallyworld: were you looking at the restore-backup bug? ISTR you were looking at something in the area
<wallyworld> yeah
<axw> wallyworld: thanks for review, please see reply on BuildToolsTarball PR
<wallyworld> ok
<wallyworld> axw: yeah, i was wondering if there were more tests. it seems ok i think
<voidspace> mgz: ping
<voidspace> mgz: unping - emailed instead
<babbageclunk> voidspace, dimitern: standup? Am I in the wrong one? You guys missed the fire alarm test!
<voidspace> omw
<dimitern> oops sorry omw
<voidspace> babbageclunk: eager beaver
<voidspace> babbageclunk: keen bean
<babbageclunk> voidspace, dimitern: porting tests to MAAS2 http://reviews.vapour.ws/r/4766/
<voidspace> babbageclunk: looking
<voidspace> babbageclunk: LGTM
<babbageclunk> voidspace: thanks#!
<babbageclunk> -#
<dimitern> babbageclunk: still reviewing btw
<babbageclunk> dimitern: d'oh, sorry
<dimitern> babbageclunk: np :) I should've said I did
<babbageclunk> dimitern: luckily there are blockers anyway
<dimitern> babbageclunk: reviewed
<babbageclunk> dimitern: thanks, fixed. How do I spell jfdi for this again?
<dimitern> babbageclunk: $$__JFDI__$$
<babbageclunk> dimitern: Thanks
<dimitern> babbageclunk: thanks for chasing all those CI issues so quickly :)
<babbageclunk> dimitern: :)
<babbageclunk> hmm - godef stopped working for stdlib packages after upgrading to xenial. Any hints for how to make it work again?
<dimitern> babbageclunk: what error are you getting?
<babbageclunk> dimitern: Just eg "godef: no declaration found for sort.Strings"
<babbageclunk> dimitern: still works fine for local types and cross-package ones.
<dimitern> babbageclunk: ah, you might have GOROOT set (or unset - can't remember what was supposed to work)
<dimitern> babbageclunk: I use GOROOT=/usr/lib/go-1.6 and that works with golang-1.6 from xenial/main
<babbageclunk> dimitern: thanks, I'll try that. Wonder why it stopped.
<babbageclunk> dimitern: yay, thanks! Worked great.
<dimitern> babbageclunk: \o/ :)
<babbageclunk> dimitern: man, felt like I had a phantom limb for a bit.
<dimitern> babbageclunk: re godef not working? :)
<babbageclunk> dimitern: yeah, just got so used to hitting a key to pull up the type of a thing that when it stopped I felt like I had an itch in a place that wasn't there anymore.
<dimitern> babbageclunk: I know what you mean - I use and depend on godef all the time; I only wish it worked better for stuff in export_test.go :/
<dimitern> babbageclunk: I have M-. bound to godef-jump and C-M-. to godef-jump-other-window
<babbageclunk> dimitern: Man, I love godef-jump-other-window
<dimitern> babbageclunk: another very useful package to add is go-eldoc btw - it shows type info in the minibuffer
<babbageclunk> dimitern: ooh, just as you move around? I've been hitting C-c C-d for that, but something that was always there would be nicer.
<dimitern> babbageclunk: yeah, e.g. with point at 'errors.IsNotFound' it shows 'errors.IsNotFound: func(err error) bool'
<dimitern> (assuming errors is already imported)
<dimitern> babbageclunk: most of the tricks in http://dominik.honnef.co/posts/2013/03/writing_go_in_emacs/ and its follow-up are good to know
<babbageclunk> dimitern: nice, thanks - I've got go-mode set up, but not all of these.
 * dimitern steps out for ~1h
 * dimitern is back
<mgz> dimitern: `juju help add-subnet` mentions `juju create-subnet` - which doesn't exist?
<dimitern> mgz: oh, yeah - it's under a feature flag "post-net-cli-mvp", but the help doc shouldn't mention it
<mgz> okay, that's a new flag to me
<mgz> should it work on aws/maas at present? (create-)
<dimitern> mgz: no, it doesn't work.. well, the flag changes a few things if enabled, like registering extra networking commands, but the commands themselves are not fully implemented (end-to-end)
<alexisb> jam, fwereade ping
<alexisb> jam and fwereade officially owe me a beer at the sprint
<alexisb> voidspace, babbageclunk, dimitern, mgz, again really well done on the maas 2.0 stuff!
<dimitern> alexisb: indeed! it's exciting to see all coming together so quickly!
<babbageclunk> Gah, does anyone know why strace -c -f is giving me negative times?
<mup> Bug #1576509 changed: Race in macaroon-bakery <blocker> <ci> <race-condition> <regression> <test-failure> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1576509>
<mup> Bug #1576527 changed: listSuite.TestListJSON got null (showSuite too) <blocker> <ci> <go1.6> <regression> <test-failure> <unit-tests> <windows> <juju-core:Fix Released by ericsnowcurrently> <https://launchpad.net/bugs/1576527>
<mup> Bug #1576911 changed: github.com/juju/juju/environs/bootstrap timeout (sigquit) <blocker> <centos> <ci> <go1.6> <regression> <test-failure> <unit-tests> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1576911>
 * babbageclunk is out code clubbing.
<mup> Bug #1576509 opened: Race in macaroon-bakery <blocker> <ci> <race-condition> <regression> <test-failure> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1576509>
<mup> Bug #1576527 opened: listSuite.TestListJSON got null (showSuite too) <blocker> <ci> <go1.6> <regression> <test-failure> <unit-tests> <windows> <juju-core:Fix Released by ericsnowcurrently> <https://launchpad.net/bugs/1576527>
<mup> Bug #1576911 opened: github.com/juju/juju/environs/bootstrap timeout (sigquit) <blocker> <centos> <ci> <go1.6> <regression> <test-failure> <unit-tests> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1576911>
<voidspace> alexisb: thanks
<mup> Bug #1578237 opened: lxd Raw response debug logging too verbose <jujuqa> <logging> <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1578237>
<natefinch> bogdanteleaga: got a minute?
<natefinch> or gsamfira?
<bogdanteleaga> natefinch, what's up
<natefinch> bogdanteleaga: I changed jujud's TLS configuration to only support TLS 1.2 last week... however, this is proving a problem for windows machines trying to download tools.  Seems like the .Net WebClient doesn't always default to having TLS 1.2 enabled.
<natefinch> bogdanteleaga: I think it's something we can enable in the cloudconfig script we send up, but wanted to get your input on it... this code specifically: https://github.com/juju/juju/blob/master/cloudconfig/userdatacfg_win.go#L197
<bogdanteleaga> yup, noticed it yesterday, got a fix, but still testing it across windows versions
<natefinch> bogdanteleaga: oh, awesome. Thanks!
<mup> Bug #1537153 changed: juju deploy --config option ignored when deploying a bundle <2.0-count> <juju-release-support> <juju-core:Invalid> <https://launchpad.net/bugs/1537153>
<mup> Bug #1565872 changed: Juju needs to support LXD profiles as a constraint <adoption> <juju-release-support> <lxd> <juju-core:Triaged> <https://launchpad.net/bugs/1565872>
<mup> Bug #1578254 opened: Race in apiserver/common and apiserver/proxyupdater <blocker> <ci> <race-condition> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1578254>
<katco> natefinch: standup time
<perrito666> does anyone have the critical bugs link?
<natefinch> perrito666: blockers: https://bugs.launchpad.net/juju-core/+bugs?field.tag=blocker+
<perrito666> tx
<perrito666> almost everything is in progress sweeeeet
<mup> Bug #1554819 changed: juju help create-model is misleading <juju-release-support> <juju-core:Triaged> <https://launchpad.net/bugs/1554819>
<mup> Bug #1554819 opened: juju help create-model is misleading <juju-release-support> <juju-core:Triaged> <https://launchpad.net/bugs/1554819>
<perrito666> does anyone oppose at me taking https://bugs.launchpad.net/juju-core/+bug/1578254 ?
<mup> Bug #1578254: Race in apiserver/common and apiserver/proxyupdater <blocker> <ci> <race-condition> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1578254>
<mup> Bug #1561212 changed: register logic can lead to user lockout <2.0-count> <docteam> <juju-release-support> <juju-core:Fix Released> <https://launchpad.net/bugs/1561212>
<mup> Bug #1576313 changed: windows: uniter tests fail because logs get dumped to stderr <blocker> <ci> <regression> <test-failure> <windows> <juju-ci-tools:Triaged> <juju-core:Won't Fix by hduran-8> <https://launchpad.net/bugs/1576313>
<natefinch> perrito666: go for it.  I was going to look at it, but hadn't gotten there yet
<perrito666> sorry
 * perrito666 notices that all our criticals are in progress :D go team
<natefinch> perrito666: it's all the same to me
<natefinch> \o/
<natefinch> sinzui: did you get in touch with the MAAS guys about the centos7 image?
<sinzui> natefinch: oh, sorry, they never replied. I will poke again
<natefinch> sinzui: do you mind if I split https://bugs.launchpad.net/juju-core/+bug/1576873 into two bugs?  One for centos and one for windows?  They're likely to be fixed in different ways by different people
<mup> Bug #1576873: Juju2 cannot deploy centos or windows workloads on maas 1.9 <blocker> <centos> <ci> <maas-provider> <regression> <windows> <juju-core:In Progress by natefinch> <https://launchpad.net/bugs/1576873>
<mgz> natefinch: that sounds sensible
<natefinch> mgz: there's no built-in like duplicate this bug or split this bug into two or anything, is there?
<natefinch> mgz: nevermind, copy and paste is quick enough
<mgz> natefinch: yeah, that should do
<natefinch> there - https://bugs.launchpad.net/juju-core/+bug/1578286
<mup> Bug #1578286: Juju2 cannot deploy windows workloads on maas 1.9  <blocker> <ci> <maas-provider> <regression> <windows> <juju-core:In Progress by natefinch> <https://launchpad.net/bugs/1578286>
<natefinch> also edited the original to remove the windows-specific stuff
<mgz> thanks nate
<mup> Bug #1578286 opened: Juju2 cannot deploy windows workloads on maas 1.9  <blocker> <ci> <maas-provider> <regression> <windows> <juju-core:In Progress by natefinch> <https://launchpad.net/bugs/1578286>
<redir> ericsnow: I'm here, give me a few minutes and I am ready
<ericsnow> redir: k
<rcj> Who can answer rackspace provider questions?
<redir> ericsnow: in Moonstone
<rcj> ericsnow, the rackspace development account loaned to juju from cpc has a litter of monitoring entities for old juju machines from prior deployments.  These might be old but I'd like you to check and ensure that the provider does clean these up correctly if it's not doing that already.
<rcj> ericsnow, I'm seeing them @ https://intelligence.rackspace.com/cloud/entities
<ericsnow> rcj: OTP, but you might want to check with mgz, abentley_, or sinzui
<rcj> ^ mgz, abentley_, sinzui: Can someone from Juju track this down?  I'd like to hand this off.
<mgz> rcj: what exactly is a monitoring entity?
<rcj> good question
<sinzui> rcj: my question too
<rcj> something in the UI at that spot in the webapi.  we see a bunch on the test account we loaned out for use in provider development.
<sinzui> rcj: I delete servers every few days, but not Odd_Bloke's server
<Odd_Bloke> sinzui: < me, earlier> I've never used RAX.  < rcj> You have a server in there.  < me> ... oh ...
<rcj> sinzui, not Odd_Bloke's servers.  this is just some entity, probably created at vm creation, that is hanging around presumably left by the juju rax provider given the entity "juju-<hash>-machine-#" names.  Wanted to make sure someone was aware and would look at the provider to ensure a clean user experience.
<mgz> rcj: so, we're not deliberately creating those, I guess we might get them as a side effect of creating machines?
<rcj> mgz, ack
<sinzui> Odd_Bloke: oh, it is gone now, but there was one for 2 weeks
<mgz> in which case it's unclear why the matching destroy doesn't clear them
<Odd_Bloke> sinzui: Yeah, I know; I am not a smart man.
<Odd_Bloke> :p
<mgz> rcj: I have no idea what to do about this
<mgz> it's not even clear from the docs how you configure the behaviour
<sinzui> mgz: rcj: cannot find this in the UI to get a list
<sinzui> mgz: I think we need a bug for juju and one for ourselves. We need a cleanup script for rackspace now
<sinzui> ah, rackspace inteligence
<rcj> https://intelligence.rackspace.com/cloud/entities
<mup> Bug #1541445 changed: empty uuid in cache.yaml after destroy-controller <2.0-count> <juju-release-support> <juju-core:Invalid> <https://launchpad.net/bugs/1541445>
<redir> ericsnow: same thing different branch http://reviews.vapour.ws/r/4767/
<niedbalski> sinzui, mgz I might need a fix on the upstart service (1.25); What's the target branch for doing a merge proposal?
<sinzui> 1.25
<sinzui> ^ niedbalski
<mup> Bug #1578327 opened: juju 1.25.5: Juju Login with invalid user tag should return CodeUnauthorized <landscape> <juju-core:New> <https://launchpad.net/bugs/1578327>
<sinzui> niedbalski: https://github.com/juju/juju/tree/1.25
<niedbalski> sinzui, yep, but afaik this is part of packaging and it seems that the debian sources are not on the same tree, right?
<sinzui> niedbalski: That is the branch we will make 1.25.6 from.
<sinzui> niedbalski: 1.25 does have different packaging rules, but to be clear NO juju has upstart or systemd rules in packaging.
<sinzui> niedbalski: jujud generates its system files on installation. So jujud supports both systemd and upstart. It chooses which files to generate based on the host OS. this is also true for windows and centos
<redir> ericsnow: tx
<niedbalski> sinzui, cool. It seems that the service definition template lives in github.com/juju/juju/service/upstart/upstart.go, thanks!.
<sinzui> fab
<sinzui> balloons: r=me
<balloons> sinzui, ack. I'll remove the tweaks to the jobs to call my code and enable the flow
<sinzui> :)
<mup> Bug #1578337 opened: no command to remove controllers <juju-core:New> <https://launchpad.net/bugs/1578337>
<mup> Bug #1578337 changed: no command to remove controllers <juju-core:New> <https://launchpad.net/bugs/1578337>
<frobware> I was running `juju upgrade-juju --upload-tools' and ran into "ERROR POST https://192.168.19.150:17070/model/63ffd949-a9c4-4319-8b70-c33d505cc7ef/tools?binaryVersion=2.0-beta7.2-xenial-amd64&series=trusty,quantal,saucy,yakkety,wily,precise,vivid,xenial,raring,utopic Forbidden Port" - is this a known issue?
<katco> the true test of a tiling wm... my mouse cursor disappears =|
<redir> katco: i3?
<katco> yep
<katco> redir: yep
<redir> katco: mine doesn't
<katco> redir: doesn't what?
<redir> disappear
<katco> redir: oh, no, i wouldn't expect it to. i've been using i3 for a few years, i've just triggered some kind of bug
<redir> oic
<katco> redir: and i expect it's not a bug in i3
<natefinch> my cursor disappears about once a month, but I use unity
<katco> natefinch: redir: it seems to happen when i mouse over the ubuntu terminal
<katco> natefinch: redir: not all the time, but when the bug happens that seems to be what i was doing
<redir> running the 1.x tests wipes out all open terminals or spams them with systemd messages
<natefinch> katco: easy fix, don't do that ;)
<katco> o.0
<katco> natefinch: you're absolutely right. i should be using the emacs terminal ;p
<redir> katco: ahhh, I have terminator set as sensible-terminal
<natefinch> redir: yeah, the systemd thing with the 1.x tests is super annoying
<mgz> did you guys open a bug for that?
<natefinch> mgz: you mean for the mouse thing?
<mgz> I mean for running the tests doing weird things
<natefinch> mgz: I think there was one open for it.... don't remember exactly
 * redir thought it was a feature
<redir> ericsnow: another same thing different branch review http://reviews.vapour.ws/r/4768/ -- no major rush as the target is blocked.
<redir> brb reboot
<mup> Bug #1578373 opened: unblock panic: invalid memory address or nil pointer dereference <jujuqa> <panic> <unblock> <juju-core:Triaged> <https://launchpad.net/bugs/1578373>
<natefinch> moved a true unit test to a test suite that doesn't do setup or teardown, shortened test length by 2/3rds :/
<perrito666> anyone could check this very brief yet critical patch? http://reviews.vapour.ws/r/4769/
<katco> natefinch: \o/ keep doing stuff like that. we'll widdle away at this problem yet!
<katco> whittle?
<katco> i don't know how to spell that.
<perrito666> still sounds foreign to me :p
<mup> Bug #1578376 opened: Cannot add MAAS credentials through juju add-credential <juju-release-support> <maas-provider> <usability> <juju-core:Triaged> <https://launchpad.net/bugs/1578376>
<mup> Bug #1578383 opened: incorrect extension on resource upload <juju-core:New> <https://launchpad.net/bugs/1578383>
<natefinch> katco: it's just one of many reasons why I don't like testsuites... it's too easy to fall into the trap of just adding another test to the existing suite, even if it doesn't need any of the setup or teardown.
<katco> natefinch: yes. it would be much better to share functionality by making shared code functions, not methods.
<natefinch> katco: yep
<mgz> katco: I'm pretty sure 'widdle' is as in, to pee on
<katco> mgz: o.o
<katco> mgz: what do you brits know about the english language anyhow
<mgz> so unless our test suite is on fire, I'm not sure we widdling away on it will help...
<katco> mgz: ...well
<mgz> :P
<katco> mgz: it is a tire fire, so.
<natefinch> lol
<natefinch> widdling on a tire fire probably doesn't help ;)
<natefinch> especially one of this size
 * natefinch pauses widdling for dinner
<babbageclunk> So this is what you folks talk about when we go to sleep.
<babbageclunk> :)
<redir> perrito666: why can't one use %#v with tomb?
<perrito666> we think that might trigger, through reflection, reading on attributes being written in other routines
<redir> Thanks perrito666
<thumper> ugh
<thumper> bit by my first golang 1.6 issue (I'm still on 1.5)
<mup> Bug #1576695 changed: Deployer 2 on trusty cannot talk to Juju2 because :tlsv1 alert protocol version <ci> <deployer> <maas-provider> <python2.7> <regression> <juju-core:Invalid> <juju-deployer:New> <https://launchpad.net/bugs/1576695>
<mup> Bug #1576873 changed: Juju2 cannot deploy centos workloads on maas 1.9 <blocker> <centos> <ci> <maas-provider> <regression> <cloud-images:New> <juju-core:Won't Fix by natefinch> <https://launchpad.net/bugs/1576873>
<mup> Bug #1577798 changed: azure 400 bad request creating networks <azure-provider> <blocker> <bootstrap> <ci> <juju-core:Invalid> <https://launchpad.net/bugs/1577798>
<babbageclunk> hey menn0, thumper - any idea why strace starts giving me nonsense numbers when I pass -f (trace children)?
<thumper> babbageclunk: no idea sorry
<babbageclunk> Seems like maybe it's something like it's subtracting too much from the times to compensate for tracing overhead. cProfile can do that sometimes in Python land.
<menn0> babbageclunk: doesn't ring any bells for me either
<babbageclunk> :(
<alexisb> thumper, wallyworld ping
<babbageclunk> Ahh well, also passing overhead of 0 doesn't help.
<perrito666> wow, I completely forgot about the graduated reviewer thing
<perrito666> hey, could someone review http://reviews.vapour.ws/r/4769/diff ?
<wallyworld> thumper: can you join https://plus.google.com/hangouts/_/canonical.com/moonstone
<perrito666> oh great, we are in one of those "no reviewers" timezone holes :p
<axw> wallyworld: when you're free, can I please get your eyeballs on http://reviews.vapour.ws/r/4765/
<wallyworld> sure
<thumper> wallyworld: still want me/
<thumper> ?
<thumper> was making coffee
<wallyworld> thumper: all good, there was a question over payload persistence. eric will look to move it across tomorrow
<thumper> ok, cool
<axw> wallyworld: http://reviews.vapour.ws/r/4770/ another one please
<wallyworld> sure
<wallyworld> axw: ooops, i missed that in review
<axw> wallyworld: all good, it's subtle
<perrito666> wallyworld: I might need to ignore your comment, some checks might trigger a race even though it is harmless
<wallyworld> perrito666: not sure how. we already access the NotifyId or whatever it is from the result params, we just need to check the Error attribute too
#juju-dev 2016-05-05
<perrito666> its the checker that does reflective stuff, ill try
<alexisb> axw, made an update to the TB call, let me know if that time sitll works
 * perrito666 loves git squash
<redir> squash perito666 +1
<perrito666> wallyworld: did you see https://github.com/juju/juju/pull/5341/files#diff-0ceede1328e8e268f09822be9fc4cf5fR70  ?
<wallyworld> perrito666: one sec, otp
<axw> alexisb: sorry went to take charlotte to school. that time is fine for me
<redir> see you tomorrow juju-dev
<perrito666> wallyworld: ping me when you are !otp
<wallyworld> perrito666: yo
<perrito666> wallyworld: yo?
<wallyworld> perrito666: a colloquial term for "hello" or "i'm here"
<perrito666> ah, also means I in spanish
<perrito666> wallyworld:  the link I passed, did you see that before asking for errorisnilness? (or did you mean something else?)
<wallyworld> didn't see the link
<wallyworld> perrito666: which link?
<perrito666> https://github.com/juju/juju/pull/5341/files#diff-0ceede1328e8e268f09822be9fc4cf5fR70
<wallyworld> perrito666: oh, didn't see that
 * perrito666 clicks fixed
<wallyworld> we want jc.ErrorIsNil though right?
<perrito666> we do, actually
<perrito666> I wonder why I used isnil
<wallyworld> errorisnil covers some corner cases
<perrito666> let me try with errorisnil
<perrito666> wallyworld: the code got dizzy with everything we tried and I forgot to errorisnil that
<wallyworld> np
<perrito666> ... value *params.Error = <nil>
<perrito666> ... value of (*params.Error) is nil, but a typed nil
<perrito666> meh
<perrito666> that is a fine example of a useless error message
<perrito666> ah its the apiserver
<perrito666> its a params.Error
<perrito666> well fine people, I need sleep
<perrito666> see you all tomorrow
<thumper> fuck
<thumper> fuckity fuck
 * thumper composes email...
<thumper> wallyworld, menn0: see email for WTF moment
<menn0> looking
<wallyworld> ok
<wallyworld> thumper: +1 from me on fixing
<thumper> wallyworld: I may quickly whip up a branch that cleans this up
<thumper> the idea of a unique index on something that can be empty is werid
<thumper> wierd
<thumper> weird
<thumper> ?
<thumper> one of those
<menn0> thumper: isn't the issue though that the providerid isn't necessarily known in this case?
<thumper> does sparse mean ignore empty?
<menn0> thumper: nil values aren't stored in sparse indexes
<thumper> ok
<menn0> thumper: the index doesn't include documents where the indexed value is nil
<menn0> thumper, wallyworld: I don't know what the behaviour is when you have a compound sparse index
<menn0> docs are here: https://docs.mongodb.org/manual/core/index-sparse/#sparse-compound-indexes
<menn0> but I don't know what they mean by "ascending/descending index keys"
<thumper> Sparse compound indexes that only contain ascending/descending index keys will index a document as long as the document contains at least one of the keys.
<natefinch> axw: that bug you linked to in the mail about os.Exit in a test.... that test should just return unless you somehow have JUJU_WANT_HELPER_PROCESS set in the environment
<thumper> yeah
<axw> natefinch: it does not.
<axw> natefinch: it calls Main, and Main is calling os.Exit
<axw> natefinch: Main should return rc, and main should call os.Exit
<axw> which is what I'm changing it to do now
<natefinch> axw: I explicitly changed main *not* to os.exit
<natefinch> axw: maybe that change got overridden somehow
<natefinch> er Main that is
<axw> natefinch: blame says the line is unchanged since 2014 *shrug*
<thumper> menn0: perhaps we should be doing the provider unique check in a different way
<natefinch> axw: weird, maybe I reverted it by accident..
<menn0> thumper: I think we should.... I'm replying to the thread now
<thumper> thanks
<mup> Bug #1578456 opened: cmd/juju/commands: not all tests are being run <juju-core:In Progress by axwalk> <https://launchpad.net/bugs/1578456>
<natefinch> axw: you're right, I don't see it. That's so weird.
<natefinch> axw: oh crap, here it is: https://github.com/juju/juju/pull/5250
<axw> natefinch: heh :)
<natefinch> axw: mind if I land mine? it has some better testing in it, too
<axw> natefinch: fine with me, but I think you'll have some ssh/scp tests to fix too
<axw> natefinch: I've just run with my changes, and there's some test failures
<natefinch> axw: blech
<axw> natefinch: if you like I can press on, and you can land over the top
<axw> should be mostly the same anyway, apart from your other changes
<natefinch> axw: fine by me
<natefinch> axw: sorry to make you waste time debugging that.
<axw> natefinch: no worries, didn't take long to track down. I was mostly concerned about the tests being skipped, but thankfully there's just a couple of small ones broken
<axw> ugh, my writing has become atrocious, constantly missing words out
<thumper> axw: who needs words
<thumper> you know...
<thumper> thingy
 * axw beams thoughts
<thumper> menn0: bugger
<thumper> menn0: migration fails...
<thumper> for spaces with a provider set
<thumper> 	github.com/juju/juju/state/spaces.go:141: ProviderId "provider" not unique
<thumper> so I'm not sure it is actually working...
<menn0> thumper: nuts
 * thumper pokes and adds a bucket of logging
<thumper> ha
<thumper> it isn't the import
<thumper> but the factory creation
<thumper> WTF
<thumper> menn0: ok, need help
<thumper> got a minute?
<thumper> I *think* what I'm doing should work
<thumper> but it isn't
<menn0> thumper: 1:1?
<thumper> ack
<axw> spontaneous shutdown is one way to get me to update my kernel
<natefinch> lol
<natefinch> what is supposed to happen if you juju deploy trusty/ubuntu --series xenial? :/
<thumper> menn0: where is the manifold for the apiserver
<thumper> ?
<natefinch> why do we even have a --series on deploy?  isn't that what the foo/bar format is for?
<menn0> thumper: that's one fo the things that jesse was working on but I don't think it's done yet
<thumper> oh fark...
<menn0> thumper: I believe he got it mostly working but there were issues, or he had to wait for some other stuff to land or something
 * thumper nods
<thumper> we probably want that landed very soon
<thumper> definitely pre 2.0
 * menn0 checks jesse's repo
<menn0> thumper: here it is: https://github.com/waigani/juju/tree/MADE-apiserver
<axw> natefinch: --series is for multi-series charms. we're moving away from "ubuntu/trusty", to plain old "ubuntu"
<axw> natefinch: and IIRC you should be able to force charms to series of the same OS by using --series, wallyworld will remember better tho
<menn0> thumper: I can take a look at that once this SSH stuff is done
<thumper> menn0: ok, that'd be good
<wallyworld> axw: --series and --force if needed
<natefinch> axw: it just means we have two different ways of specifying the same thing... which is confusing and in this case, causing a bug
<menn0> thumper: it's migrations related anyway
<wallyworld> eg if a charm declares it supports trusty and we want to install on xenial
<thumper> menn0: it is
<natefinch> juju deploy precise/ubuntu will deploy ubuntu on xenial
<axw> menn0: how many changes do you have in cmd/juju/commands/scp.go? I'm looking at moving off JujuConnSuite, and using a mock API
<axw> menn0: don't want to make your life difficult tho
<wallyworld> the expectation is those series specific urls are not going to be necessary, but in the interim, that charm should go to precise
<menn0> axw: i've made extensive changes to the tests so it would be good if you waited
<axw> menn0: okey dokey
<menn0> axw: I'm hoping to be done with it today
<natefinch> I can fix that, of course, but then we still have the case of juju deploy precise/ubuntu --series xenial.... I guess at that point I just have to error out, since I don't actually know what the user wants
<axw> menn0: can you please test them in isolation, because I found TestSCPCommand fails atm if you run it with others
<axw> menn0: actually I can fix that later if need be
<axw> never mind
<wallyworld> natefinch: no, in that case you require --force
<wallyworld> and yeah, error if --force is not provided
<natefinch> wallyworld: well, asking the charmstore for precise/ubuntu returns you the multi-series ubuntu charm
<wallyworld> it does yes
<natefinch> wallyworld: which can deploy to xenial just fine
<wallyworld> those series/charm urls are for backwards compatibility
<natefinch> ...so right now, it does
<wallyworld> natefinch: but the user has specified precise in the url - if they want --series xenial we need --force
<menn0> axw: by "in isolation" do you mean just run the individual suites?
<wallyworld> if they specify the charm without series in url, that's different
<axw> menn0: actually I had arse about, so forget I said anything :)   I'm finding that the scp tests fail because the output includes identity files from the fake juju home, and that's not expected in the output. running TestSCPCommand passes if I run it by itself
<axw> menn0: so it seems to not be getting a clean fake home
<menn0> axw: hmmm... I haven't noticed that
<axw> menn0: I'll fix that and forget about changing scp for now
<axw> menn0: well, cmd/juju/commands doesn't run all the tests atm. have you changed that in your branch?
<axw> menn0: i.e. because of the bug I sent to the list
<menn0> axw: no I haven't changed that, just the ssh, scp and debug-hooks tests and implementations
<axw> ok
<menn0> axw: yeah I saw that
<menn0> axw: but i've been running the individual tests/suites using -gocheck.f
<menn0> axw: did you fix that problem already?
<axw> menn0: nope, still looking into it
<axw> menn0: I'll fix that and come back to removing JujuConnSuite after you land your changes
<menn0> axw: ok thanks
<menn0> axw: one wrinkle, I can't land this stuff until master is unblocked
<axw> menn0: it's cool, no hurry
<menn0> axw: if someone uses juju scp/ssh to an arbitrary hostname or address, do you think the user's personal known_hosts should be used, or just /dev/null ?
<menn0> axw: e.g. "juju ssh 10.1.2.3", as opposed to "juju ssh 3"
<anastasiamac> fwereade: o/
<anastasiamac> fwereade: the lts PR was backported to 1.25... should we forward suggestions to master once the PR is ready?
<fwereade> dimitern, http://reviews.vapour.ws/r/4772/diff/1/?file=346652#file346652line93
<fwereade> anastasiamac, ooh, yes please, that would be great
<fwereade> anastasiamac, sorry I missed that
<anastasiamac> fwereade: thank you. I'll add it to the comment so that Reed keeps track of it :D
<axw> fwereade: would you kindly take a look at http://reviews.vapour.ws/r/4776/? fixes one of the critical blockers
<fwereade> axw, ack
<fwereade> axw, LGTM
<axw> fwereade: cheers
<fwereade> dimitern, http://reviews.vapour.ws/r/4734/ looks good but undertested
 * fwereade bbiab
<mup> Bug #1577798 opened: Juju gives unhelpful error when azure out of resource groups <azure-provider> <blocker> <bootstrap> <ci> <juju-core:Triaged> <https://launchpad.net/bugs/1577798>
<tych0> http://paste.ubuntu.com/16239197/
<tych0> any reason it's looking for lxdbr0 on make install?
<rick_h_> aisrael: ping
<aisrael> rick_h_: pong
<rick_h_> aisrael: got a sec for hangout please?
<aisrael> rick_h_: absolutely
<mgz> tych0: see pr #5300 - sinzui made the makefile work on a fresh install. yeah, you get kipple in your case, but the install still works, no?
<tych0> mgz: it does, it's just weird
<mgz> tych0: suggestions on making it less weird welcome
<tych0> mgz: yep :)
<mgz> tych0: l104 of Makefile - using ifconfig lxdbr0 to find out if the networking is setup, is the cause of the stderr output
<mgz> I guess just a 2>&1 would do but maybe there's something smarter
<tych0> oh, right
<tych0> because make executes the if statements when parsing the file
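The fix mgz suggests amounts to silencing the probe's stderr in the parse-time $(shell ...) call. A hypothetical sketch of the kind of change (the actual patch landed in PR 5349 and may differ):

```make
# Probe for lxdbr0 quietly: redirect stderr to /dev/null so a missing
# interface doesn't print noise when make parses the file. (Merging
# stderr into stdout with 2>&1 would instead pollute the captured
# output and make the ifeq test wrong.)
ifeq ($(shell ifconfig lxdbr0 2>/dev/null),)
  # lxd networking not set up; skip lxd-dependent steps
endif
```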
<tych0> mgz: yeah, that seems reasonable to me, do you want to send a patch or should I?
<mgz> tych0: go for it
<tych0> mgz: ok, will do, thanks
<tych0> mgz: https://github.com/juju/juju/pull/5349
<mgz> tych0: lgtm
<mgz> tested the change locally as well
<tych0> mgz: cool, thanks
<tych0> mgz: does your LGTM mean that i can $$merge$$? I don't know much about the process here still
<mgz> tych0: yes, you can
<mgz> and yeah, it's not super clear :) (I did stamp on the review site)
<mgz> hm, though, might be blocking bugs at present
<mgz> lets see
<mgz> tych0: yeah, like a million
<tych0> mgz: ha, ok :0
<tych0> :)
<mup> Bug #1578337 changed: no command to remove controllers <juju-core:Invalid> <https://launchpad.net/bugs/1578337>
<mup> Bug #1578337 opened: no command to remove controllers <juju-core:Invalid> <https://launchpad.net/bugs/1578337>
<dimitern> fwereade: thanks for the review on http://reviews.vapour.ws/r/4734/ btw!
<dimitern> fwereade: if you're still about, I'm pushing some updates, including the missing tests and will also tackle your suggestions
<fwereade> dimitern, cool, thanks
<dimitern> fwereade: I'll ping you in the next 30m or so for a final look, while I finish a few more live tests
<natefinch> fwereade: do you have thoughts on what should happen if someone does juju deploy precise/ubuntu --series xenial?  Should it depend on whether or not ubuntu is a multiseries charm or not?  If it's not a multiseries charm, it's pretty clear that this should fail without --force, but if it is a multiseries charm, precise/ubuntu really is the same charm as xenial/ubuntu, so should we let it deploy to xenial without --force, or enforce regularity in the CLI even though --force is not technically needed?
<natefinch> also welcome anyone else's thoughts on this.  I'm not really sure I know what the right answer is.  I feel like, either way, we're going to annoy someone
<natefinch> I guess, if we're going to annoy people either way, we might as well make it consistent, and require --force.
 * natefinch rubber ducks with the channel as a whole.
<fwereade> natefinch, ha -- it feels to me like the user is explicitly specifying a charm, and explicitly specifying a series; and expecting us to resolve the charm, discover that it's multiseries and xenial is supported, and do what they told us to
<fwereade> natefinch, if it's not multiseries, --force for sure, but I don't think we should need it if they're doing something sane in an unorthodox fashion
<natefinch> fwereade: I just have a feeling that it would feel weird that juju deploy precise/ubuntu --series xenial would work, but juju deploy precise/mysql --series xenial would not
<fwereade> natefinch, I think of "precise/ubuntu" as a charm selector
<natefinch> fwereade: to the user, it's different behavior based on invisible information
<fwereade> natefinch, that's always been the underlying intent: what the user types is input to a specific component that resolves an *actual* charm, and we then proceed purely on the basis of the charm we found: it feels odd to me to attach additional weight to the selector, just to make something fail
<fwereade> natefinch, deploying "precise/ubuntu" to machines with two different series is, I think, a clear way of specifying that you want the exact charm in both places
<fwereade> natefinch, it's not an unreasonable request
<fwereade> natefinch, it will succeed or fail based on the charm in either case
<natefinch> fwereade: right, but, to an end user, sometimes it'll magically work, and sometimes it'll magically require --force
<natefinch> fwereade: I guess that's ok, since working indicates that it's supposed to be ok there, and not working indicates it may well explode
<fwereade> natefinch, I think so, yeah
<fwereade> natefinch, and if they *really* want to try it they can always --force
<natefinch> fwereade: I think that makes sense... it does give the user extra information that "hey this might break" whereas with a multjseries charm we can say "hey, this should work"
<fwereade> natefinch, exactly, yeah
<natefinch> fwereade: cool. thanks for helping me talk it out :)
<fwereade> natefinch, always a pleasure :)
<niedbalski> alexisb, ping
<alexisb> niedbalski, ping
<niedbalski> ericsnow, https://bugs.launchpad.net/juju-core/+bug/1514874 I am seeing this now with 1.25.5, juju is uninstalling after rebooting the units/state.
<mup> Bug #1514874: Invalid entity name or password error, causes Juju to uninstall <juju-core:Triaged> <https://launchpad.net/bugs/1514874>
<niedbalski> ericsnow, I think that even if 'returned during API open: invalid entity name or password' is a valid cause, throwing an ErrTerminateAgent is an over-reaction; better to put the connection attempt in a loop and log a warning/fatal error.
<ericsnow> niedbalski: OTP
<perrito666> niedbalski: I thought we had taken care of that
<niedbalski> perrito666, I thought the same, but that's probably only on 2.0
<perrito666> niedbalski: I am pretty sure we fixed that around 1.21
<perrito666> the last time that bit someone was in 1.18, for reference see PS4
<ericsnow> niedbalski: taking a look
<niedbalski> perrito666, which specific bug?
 * perrito666 clicks what niedbalski linked to see what exactly we are talking about
<natefinch> fwereade: still around?
<niedbalski> perrito666, well, with 1.25.5 it's quite easy to reproduce; edit the agent.conf of the unit, change the apipassword, and restart. Now it seems that some other condition can be causing this same behavior.
<redir> alexisb: what command you working on flattening currently? block?
<perrito666> niedbalski: that smells strongly like a regression
<alexisb> redir, yep
<alexisb> the others are all yours
<alexisb> and i am happy to meet and discuss if you like
<perrito666> I should know, I have broken agent.conf in very creative ways with restore and juju never uninstalled
<alexisb> katco, ping
<katco> alexisb: pong
<niedbalski> perrito666, I just did it, wiped.
<katco> alexisb: sorry was eating lunch earlier. responded to you email
 * perrito666 scratches chin
<perrito666> niedbalski: is this behavior also present in 2.0?
<alexisb> katco, np, I see the response now thank you
<alexisb> katco, we are going to want to jump on that bug right away
<katco> alexisb: ah, that was 13:00 today?
<redir> alexisb: I'm looking through https://github.com/juju/juju/pull/5240/files to see how you tackled it.
<niedbalski> perrito666, I can check.
<perrito666> niedbalski: if you say yes I am very much about to ring a huge alarm
<katco> alexisb: kk
<alexisb> redir, once I am done with my upcoming call we should probably chat, I probably can save you some time
<fwereade> natefinch, briefly
<natefinch> fwereade: along the same lines as before juju deploy precise/ubuntu currently deploys to xenial, since the selected charm says xenial is best
<natefinch> fwereade: bug or feature?
<fwereade> anyone who's around, I don't quite have time to look at dimitern's branch but the only thing blocking an LGTM was a lack of unit testing -- I am happy with the code
<fwereade> natefinch, ha
<fwereade> natefinch, that *does* rather seem to break the law of least surprise, doesn't it :-/
<natefinch> fwereade: I actually kind of wish precise/ubuntu wouldn't resolve anymore if ubuntu is a multiseries charm
<fwereade> yeah
<dimitern> fwereade: thanks! I'm doing a final live test after adding concrete errors as discussed (it was surprisingly hard to do outside of the errors package and still get the same effect)
<fwereade> natefinch, the most DWIMmy thing I can think of is to use features of the selector as tie-breakers when the user hasn't otherwise specified
<natefinch> fwereade: that seems fair
<fwereade> natefinch, cool
<natefinch> fwereade: cool, thanks
<fwereade> dimitern, hard to do as single error values?
<natefinch> core team meeting anyone?
<fwereade> natefinch, ...dammit, I can't really :( need to get supper
<dimitern> fwereade: well, that would've been a lot simpler, but I wanted to preserve the error stack and the usual way of tracing/annotating
<natefinch> fwereade: totally understood :)
<fwereade> dimitern, fair enough, but I think once you've understood the situation well enough to pick a specific error the rest of the context is less interesting and can probably be dropped
<fwereade> regardless :)
 * fwereade gtg
<perrito666> so anybody coming to the team meeting?
<redir> alexisb: OK.
<redir> on the team meeting
<dimitern> alexisb, fwereade, also OCRs, etc: http://reviews.vapour.ws/r/4734/ is ready and tested well enough to get approval and land I think, which will fix bug 1321442
<mup> Bug #1321442: Juju does not support EC2 with no default VPC <ec2-provider> <network> <juju-core:In Progress by dimitern> <https://launchpad.net/bugs/1321442>
<natefinch> redir: btw - http://juju.fail/ will give you the state of blocked release branches
<alexisb> redir, I am available when you are
<mup> Bug #1501398 changed: Test suite failures with WSARecv timeout <blocker> <centos> <ci> <go1.6> <ppc64el> <test-failure> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1501398>
<mup> Bug #1570883 changed: imageSuite.TestEnsureImageExistsCallbackIncludesSourceURL fails on centos go 1.6 <blocker> <centos> <ci> <go1.6> <jujuqc> <lxd> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1570883>
<mup> Bug #1578254 changed: Race in apiserver/common and apiserver/proxyupdater <blocker> <ci> <race-condition> <regression> <juju-core:Fix Released by hduran-8> <https://launchpad.net/bugs/1578254>
<mgz> natefinch, starving his family
<alexisb> redir, I am going to step out for a bit for lunch, I will ping you when I am back
<redir> alexisb: me too and I'll be back.
<mgz> sinzui: should bug 1571914 and so on actually be blockers?
<mup> Bug #1571914: github.com/juju/juju/cmd/jujud unit tests fail if xenial is the LTS <juju-core:In Progress by reedobrien> <juju-core 1.25:In Progress by reedobrien> <https://launchpad.net/bugs/1571914>
<mgz> we hacked around the failures, but holding up the branch from landing seems perverse
<sinzui> mgz: I will make an exception for the LTS fixes. We are running out of time to land those before my hack fails
<mgz> sinzui: is it really an exception?
<mgz> without the hack, it would be failing tests, which would block
<mgz> so, I think redir should land, and we should remove the hack
<sinzui> mgz: yes, that is what I mean.
<mgz> $$fixes-1571914$$
<niedbalski> ericsnow, any idea?
<ericsnow> niedbalski: just about to add a lengthy comment to the bug
<niedbalski> ericsnow, thanks.
<ericsnow> niedbalski: done; hope that helps
<redir> mgz that didn't work against master
<mgz> redir: let me double check
<mgz> redir: I see, only marked as 'high' against master, fixing now
<mgz> redir: go again
<fwereade> alexisb, http://reviews.vapour.ws/r/4734/ LGTM, none of my issues should block landing
 * fwereade disappears again
<katco> ericsnow: natefinch: we need to chat rq
<natefinch> katco: kk
<katco> natefinch: moonstone
<alexisb> fwereade, awesome thank you
<alexisb> redir, I am back and ready when you are
<redir> k
<redir> alexisb: gimme 5
<redir> sinzui: the s390x /tmp issue is all sorted yes?
<sinzui> redir: yes it is
<redir> tx
<redir> alexisb: there's no cards in leankit for these command updates are there?
<redir> alexisb: what about launchpad?
<alexisb> redir, no
<alexisb> and no
<redir> k
<alexisb> redir, probably worth adding a card to tanzinite board for this
<redir> gx
<redir> tx
<redir> OK will do alexisb
<perrito666> mgz: can you pass me that link for maas images again?
<mgz> perrito666: wiki.cloudbase.it/maas
<perrito666> tx
 * perrito666 kicks his maas until it works
<katco> bogdanteleaga: ping?
<mup> Bug #1578834 opened: update-alternatives fails to switch between juju-1 and juju-2 <juju-core:New> <https://launchpad.net/bugs/1578834>
<perrito666> curiosity, cannot run qemu and vbox at the same time
<perrito666> wallyworld: is 1:1 going to happen?
<wallyworld> perrito666: yeah, finishing a qa meeting, and also just need to ensure another clash won't happen, will ping in a sec
<perrito666> k
<axw> menn0: sorry for piking last night, I hope my notes were at least helpful. did you talk about remote lxd / untrusted users vs. profiles at all?
<menn0> axw: np. your notes were very helpful.
<menn0> axw: we didn't talk about the lxd profiles item much, but the one pager for it is on my plate now.
<menn0> axw: can we perhaps talk about it on monday?
<axw> menn0: yep, no worries
<menn0> axw: i'm a bit pushed for time today
<axw> menn0: np, just wondering if it came up that's all. we can talk later
<axw> (it only occurred to me while I was writing notes :))
<menn0> axw: to be honest, before I saw your notes about it I thought it was only about the lxd provider. I wasn't think about remote lxd containers.
<menn0> axw: one thing which was mentioned last night is that Will was pretty sure placement directives were the right way to go.
<axw> menn0: it may be, but I meant lxd provider targeting a remote host, not lxd containers
<axw> menn0: yeah that's my feeling too
<menn0> axw: I haven't put enough thought into it yet
<redir> mgz: sinzui: FWIW, both the 1.25 and master fixes for the LTS hack have merged. In case you missed 'em.
<mgz> redir: ta
<axw> menn0: you can't *yet
<axw> *
<sinzui> redir: thank you
#juju-dev 2016-05-06
<anastasiamac> is anyone working with tip of master?
<anastasiamac> i cannot compile..
<anastasiamac> mongo/open.go:120: undefined: tls.TLS_RSA_WITH_AES_128_GCM_SHA256
<anastasiamac> mongo/open.go:121: undefined: tls.TLS_RSA_WITH_AES_256_GCM_SHA384
<anastasiamac> (and yes i've run godeps) :-D
<anastasiamac> any suggestions?
<natefinch> anastasiamac: using go 1.6?
<natefinch> anastasiamac: my guess is you're using an older version that didn't have that defined
<anastasiamac> natefinch: no.. not go1.6 :( i guess i need to move \o/
<natefinch> anastasiamac: we're officially a go 1.6 shop now. Which is nice except for the build times.  If you're feeling brave, use master of the go repo. They keep it really stable and nice, and compile times have dropped by about 25%
<anastasiamac> natefinch: my reservation was that CI is not fully 1.6?...
<anastasiamac> natefinch: u know feeling brave and being brave are a bit different ... m certainly not feeling too brave today :D
<natefinch> anastasiamac: the call was made a day or two ago that we're abandoning go 1.2 entirely in CI.  The only reason we were keeping it around was for windows and I think centos tests that failed more on 1.6... but it was decided that we should just fix the 1.6 tests, not rely on 1.2 to make those pass.
<natefinch> anastasiamac: which I entirely agree with... given that we were shipping 1.6, but only testing on 1.2
<natefinch> (for windows and centos)
<anastasiamac> natefinch: sure.. we still have PRs that require 1.6 and we are holding off landing them coz .. well we are not brave :D
<natefinch> anastasiamac: well, time to be brave, because we're 100% 1.6 now, AFAIK.
<cmars> wallyworld, CQRS? http://martinfowler.com/bliki/CQRS.html
<wallyworld> cmars: yeah, that's it
<cmars> cool
<anastasiamac> axw: r we brave enough ^^^ to land azure PR once master is unblocked?
<axw> anastasiamac: should be fine, I'll probably bump up to the latest SDK before doing so
<axw> anastasiamac: intending to test it again today, and hten base the retry changes on top
<anastasiamac> axw: \o/
<redir> go 2016:)
<redir> g'nite juju-dev
<natefinch> hmm... good to know.. for some reason ssh doesn't like my id_rsa key pair
<mup> Bug #1578898 opened: cmd/juju/commands: bootstrap tests are fetching GUI metadata from streams.canonical.com <juju-core:Triaged> <https://launchpad.net/bugs/1578898>
<mup> Bug #1578906 opened: Rackspace no longer works with Juju <blocker> <ci> <rackspace-provider> <regression> <juju-core:Triaged> <juju-release-tools:Triaged> <https://launchpad.net/bugs/1578906>
<dooferlad> voidspace: guess you are the only sapphire person online today so no standup?
<voidspace> dooferlad: I was there briefly
<voidspace> dooferlad: but on my own so I left :-)
<voidspace> dooferlad: where's dimiter?
<dooferlad> voidspace: no idea
<voidspace> ah, holiday
<voidspace> public holidays on two fridays in a row - nice :-)
<dooferlad> voidspace: heh. I could do with a holiday. Or a day off sick.
<dooferlad> voidspace: or just passing out for 24 hours
<voidspace> dooferlad: TGIF
<dooferlad> voidspace: something like that
<voidspace> :-)
<voidspace> nice and sunny here - and hopefully lasting into the weekend
<voidspace> we're planning a trip to bluebell woods
<voidspace> dooferlad: you got any plans?
<dooferlad> probably sleep
<dooferlad> Liz and I both have stinking colds
<voidspace> :-(
<voidspace> hope you get well
<voidspace> *quickly
<dooferlad> thanks :-|
<voidspace> too fast on the old enter key
<rogpeppe> anyone know how to remove a model in state (using the state package API) ?
<rogpeppe> fwereade: I seem to have forgotten how to remove things from the state... ^ :)
<fwereade> rogpeppe, model.Destroy()?
<rogpeppe> fwereade: that sets life to dead but doesn't seem to remove it
<rogpeppe> fwereade: i'm wondering if Cleanup is the thing to use
<fwereade> rogpeppe, the last thing to do with a dead model is RemoveAllModelDocs
<rogpeppe> fwereade: ah, thanks. I think I might've expected that to be named Model.Remove
<fwereade> rogpeppe, yeah, that would indeed be the sane thing to call it. not sure why the implementation details got leaked into the name there
<rogpeppe> fwereade: that works BTW, thanks!
<fwereade> rogpeppe, cool :)
<rogpeppe> here's a fix for juju using excessive numbers of mgo sockets in some cases: http://reviews.vapour.ws/r/4783/
<mup> Bug #1579002 opened: state: uses too many mgo sockets in loops <juju-core:New> <https://launchpad.net/bugs/1579002>
<mup> Bug #1579002 changed: state: uses too many mgo sockets in loops <juju-core:New> <https://launchpad.net/bugs/1579002>
<mup> Bug #1579002 opened: state: uses too many mgo sockets in loops <juju-core:New> <https://launchpad.net/bugs/1579002>
<mup> Bug #1579010 opened: state: removing model can generate huge transactions <juju-core:New> <https://launchpad.net/bugs/1579010>
<bogdanteleaga> katco, might be too late, but pong
<mup> Bug #1579010 changed: state: removing model can generate huge transactions <juju-core:New> <https://launchpad.net/bugs/1579010>
<mup> Bug #1579010 opened: state: removing model can generate huge transactions <juju-core:New> <https://launchpad.net/bugs/1579010>
<mup> Bug #1579051 opened: Race in juju/controller/destroy and TestDestroyCommandConfirmation <ci> <race-condition> <regression> <juju-core:New> <https://launchpad.net/bugs/1579051>
<mup> Bug #1462966 changed: worker/provisioner: multiple data races <race-condition> <juju-core:Fix Released> <https://launchpad.net/bugs/1462966>
<mup> Bug #1470297 changed: worker/uniter/storage: data race in test <race-condition> <unit-tests> <juju-core:Fix Released> <https://launchpad.net/bugs/1470297>
<mup> Bug #1519183 changed: featuretests: tests fail under -race because of crappy timing issues <2.0-count> <race-condition> <juju-core:Fix Released> <https://launchpad.net/bugs/1519183>
<mup> Bug #1579057 opened: Race in github.com/juju/juju/worker/catacomb/catacomb <blocker> <ci> <race-condition> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1579057>
<mup> Bug #1579059 opened: MainSuite.TestFirstRun2xFrom1x fails on windows <blocker> <ci> <regression> <test-failure> <unit-tests> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1579059>
<mup> Bug #1579062 opened: localHTTPSServerSuite no trusty arm64/ppc64el images <arm64> <blocker> <ci> <ppc64el> <regression> <test-failure> <unit-tests> <juju-core:Triaged> <https://launchpad.net/bugs/1579062>
<mup> Bug #1571914 changed: github.com/juju/juju/cmd/jujud unit tests fail if xenial is the LTS <blocker> <test-failure> <juju-core:Fix Released by reedobrien> <juju-core 1.25:Fix Released by reedobrien> <https://launchpad.net/bugs/1571914>
<mup> Bug #1576021 changed: 1.25.6 cannot deploy on CI maas 1.9 or 1.8 <blocker> <ci> <maas-provider> <regression> <juju-ci-tools:Triaged> <juju-core:Invalid by dooferlad> <juju-core 1.25:Fix Released by dooferlad> <https://launchpad.net/bugs/1576021>
<mup> Bug #1576368 changed: blockdevice 2.0 schema check failed: model: expected string, got nothing <blocker> <ci> <deploy> <maas-provider> <juju-core:Fix Released by 2-xtian> <https://launchpad.net/bugs/1576368>
<katco> fwereade: hey, having trouble figuring out how to implement a timeout with a worker (i.e. worker.Wait(), but also continue after a certain amount of time). is there any prior art?
<alexisb> katco, happy friday!
<katco> alexisb: it is friday
<alexisb> katco, not urgent but when you have a chance we need to add this bug on the bug squad board w/ blocker tag: https://bugs.launchpad.net/juju-core/+bug/1578906
<mup> Bug #1578906: Rackspace no longer works with Juju <blocker> <ci> <rackspace-provider> <regression> <juju-core:Triaged> <juju-release-tools:Triaged> <https://launchpad.net/bugs/1578906>
<natefinch> katco: at least no one will ask you to get up at 5:30 tomorrow (probably) :D
<katco> alexisb: i'll do that now
<fwereade> katco, sorry, restate please?
<katco> fwereade: if i write a worker, and i only want to continue contingent on waiting on the worker or a timeout, can you point me to any prior art?
<katco> fwereade: i want to do something like this: w := NewFooWorker; select { case <-w.Wait: case <- time.After(5*time.Minute)}
<fwereade> katco, ah ok -- I think I'd just do something like http://paste.ubuntu.com/16260449/ -- and then Wait for the worker which I'd trust to notify me if it hadn't really finished its job
<katco> fwereade: (headed to meeting) where does the abort channel come from? that's what i can't figure out how to safely get out of a worker
<fwereade> katco, the abort channel is the abort channel for the agent, or whatever it is, that's running this local logic
<fwereade> katco, (leaving a goroutine leaked to kill an already-dead worker in the future, if the process survives that long, is not really a big deal but it's untidy ;p)
<fwereade> katco, (and it makes it hard to move the logic around safely too, I think)
<fwereade> katco, sane?
<katco> ericsnow: standup time
<katco> fwereade: will digest in a bit, sorry
<fwereade> katco, historical interlude: Dead() and Err() are the methods you might have to use on, say, an old-style watcher that mixes lifetime and notification concerns
<fwereade> katco, np :)
<fwereade> katco, if you *need* a Dead chan, you could build one like this: http://paste.ubuntu.com/16260672/
<alexisb> fwereade, have you seen this: https://bugs.launchpad.net/juju-core/+bug/1579057
<mup> Bug #1579057: Race in github.com/juju/juju/worker/catacomb/catacomb <blocker> <ci> <race-condition> <regression> <juju-core:Triaged> <https://launchpad.net/bugs/1579057>
<fwereade> katco, I think there's something we haven't quite figured out re: workers that should run indefinitely, vs workers that exist for a specific task: *most* of our workers are indefinite, and Kill->no-error makes sense there, narrowly; but for workers that are *expected* to complete of their own accord, ErrAborted makes sense
<fwereade> alexisb, looking
<fwereade> alexisb, looks like it's in the uniter remotestate stuff? can take a look shortly though
<alexisb> fwereade, that would be awesome, thank you
<alexisb> katco, I added the remaining critical blockers for beta7 on the bug squad board
<alexisb> katco, if people are looking for something to do they are there :)
<katco> alexisb: ta... ericsnow picked up the rackspace one
<alexisb> I saw that :)
<alexisb> and fwereade picked up one and dimitern picked up another
<katco> natefinch: redir: more blocking bugs are on the board for when you're ready
<alexisb> so just 2 left
<katco> alexisb: ... for now (dun dun dun!)
<alexisb> natefinch, katco: if nate could finish up the manual provider bug (including making sure 2.0 works) that would be awsome, and priority
<katco> alexisb: he has a PR up against the upstream project
<katco> alexisb: but i think it's contingent on them accepting it, or us vendoring that project and carrying the patch
<natefinch> yep.. I'm writing a test for it now
<alexisb> ah I see
<alexisb> ok, sorry missed that piece
<alexisb> natefinch, please put test results in the bug
<katco> fwereade: ok, digested your comments. so manage the timeout outside the worker and kill when it's timed-out?
<fwereade> katco, I think so, yeah
<fwereade> katco, and I think I've realised something about abort chans -- *workers* don't actually need them, because you can always just Kill externally
<fwereade> katco, it's only long-running *funcs* that need abort chans
<fwereade> katco, I think :)
<katco> fwereade: hm. i'd have to use your pastebin to expose a channel that signals when the worker is dead?
<fwereade> katco, you can just Wait, can't you?
<katco> fwereade: for reboot? no, as written, the reboot will go ahead after a timeout, even if we're still seeing containers up
<fwereade> katco, start the worker going, start the timeout-killer, wait for the worker to stop one way or the other
<katco> fwereade: oh, so pass an abort into the worker that comes from the timeout-killer
<fwereade> katco, it doesn't even need that, does it? the interaction via Kill() is all we need
<katco> fwereade: so something like: go func() { time.Sleep(5*time.Minute); worker.Kill() }?
<fwereade> katco, (and if you want it to be stopped when the enclosing context stops, manage that via catacomb and you get stop-when-parent-stopped for free)
<fwereade> katco, yeah
<fwereade> katco, that's the leaked goroutine I don't *really* care about -- the abort chan in the first pastebin would be to clean that up when the worker stopped
<mup> Bug #1579127 opened: Cannot deploy windows nano <blocker> <deploy> <windows> <juju-core:Triaged> <https://launchpad.net/bugs/1579127>
<fwereade> https://github.com/juju/juju/pull/5356 hasn't been picked up by RB for some reason, but fixes lp:1579057
<fwereade> cmars, you free to review^^?
<fwereade> alexisb, sinzui: ^^
<cmars> fwereade, sure, looking
<cmars> fwereade, LGTM
<fwereade> cmars, cheers
<fwereade> alexisb, https://bugs.launchpad.net/juju-core/+bug/1579057 $$merge$$ing
<mup> Bug #1579057: Race in github.com/juju/juju/worker/catacomb/catacomb <blocker> <ci> <race-condition> <regression> <juju-core:Triaged by fwereade> <https://launchpad.net/bugs/1579057>
<alexisb> thanks!
<mgz> ericsnow: I believe I have fixed rackspace
<ericsnow> mgz: sweet
<perrito666> well lxd hates me today
<mgz> just today?
<rick_h_> mgz: yay, does it need the next beta to work?
<rick_h_> mgz: or trunk I should say?
<mgz> rick_h_: no, it seems like part of our image streams got deleted from the account
<rick_h_> mgz: ok cool ty much!
<mgz> I regenerated using the script from last time, and it passed the deploy test
<mgz> rick_h_: I have a couple of (somewhat related) things to bug you about if you have a sec
<mgz> rick_h_: we have a couple of failures in CI due to out of date MAAS images, which are nominally cpc-managed (but I think smoser knows most about them)
<rick_h_> mgz: ok
<mgz> rick_h_: bug 1568895 and bug 1576873
<mup> Bug #1568895: Cannot add MAAS-based LXD containers in 2.0beta4 on trusty <ci> <jujuqa> <lxd> <maas-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1568895>
<mup> Bug #1576873: Juju2 cannot deploy centos workloads on maas 1.9 <blocker> <centos> <ci> <maas-provider> <regression> <cloud-images:New> <juju-core:Won't Fix by natefinch> <https://launchpad.net/bugs/1576873>
<mgz> so, if you have pat or someone on hand to kick that would be ace, sinzui is also going to email and beg
 * rick_h_ is looking
<mgz> rick_h_: subject #2, just making sure you've mentioned to john grimm this week that we really want someone on the server team who'll respond to packaging review requests etc from us
<mup> Bug #1579148 opened: dhclient needs reconfiguring after bridge set up <network> <juju-core:Triaged> <https://launchpad.net/bugs/1579148>
<mup> Bug #1578906 changed: Rackspace no longer works with Juju <blocker> <ci> <rackspace-provider> <regression> <juju-ci-tools:In Progress by gz> <juju-core:Invalid by ericsnowcurrently> <https://launchpad.net/bugs/1578906>
<mup> Bug #1578906 opened: Rackspace no longer works with Juju <blocker> <ci> <rackspace-provider> <regression> <juju-ci-tools:In Progress by gz> <juju-core:Invalid by ericsnowcurrently> <https://launchpad.net/bugs/1578906>
<sinzui> mgz: rick_h_ : I am returning to the maas email now. I had to rescue a child from school.
<mgz> sinzui: had good news, rick said both bugs are being worked on
<sinzui> the maas issue?
<mup> Bug #1578906 changed: Rackspace no longer works with Juju <blocker> <ci> <rackspace-provider> <regression> <juju-ci-tools:In Progress by gz> <juju-core:Invalid by ericsnowcurrently> <https://launchpad.net/bugs/1578906>
<mup> Bug #1579173 opened: wily onfigSuite.TestNewModelConfig test failure lxd <blocker> <ci> <lxd> <regression> <test-failure> <unit-tests> <wily> <juju-core:Triaged> <https://launchpad.net/bugs/1579173>
<mup> Bug #1579173 changed: wily onfigSuite.TestNewModelConfig test failure lxd <blocker> <ci> <lxd> <regression> <test-failure> <unit-tests> <wily> <juju-core:Triaged> <https://launchpad.net/bugs/1579173>
<rick_h_> sinzui: mgz yes, last report was images with backports were in building/testing
<sinzui> \o/
<sinzui> thank you rick_h_
<mup> Bug #1579173 opened: wily onfigSuite.TestNewModelConfig test failure lxd <blocker> <ci> <lxd> <regression> <test-failure> <unit-tests> <wily> <juju-core:Triaged> <https://launchpad.net/bugs/1579173>
<redir> sinzui: is there an arm64/ppc64el system I can test on ?
<sinzui> redir: sure. I think you need a xenial host
<redir> sinzui: I don't think I understand what you mean.
<sinzui> redir: I think the arm64 host is best, but the xenial ppc64el host is idle. Both hosts are on very restricted networks. I will need to pass you some ssh config to get to them
<redir> sinzui: OK let me know.
<sinzui> redir: I think you are working on https://bugs.launchpad.net/juju-core/+bug/1579062 which most often happens on xenial
<mup> Bug #1579062: localHTTPSServerSuite no trusty arm64/ppc64el images <arm64> <blocker> <ci> <ppc64el> <regression> <test-failure> <unit-tests> <juju-core:In Progress by reedobrien> <https://launchpad.net/bugs/1579062>
<redir> sinzui: correct
<sinzui> okay, I will get you access to both ppc64el and arm64. You can work on them as you need
 * redir goes to make tea
<redir> sinzui: tx
<sinzui> redir: check your email
<redir> sinzui: thanks. Did you add my key to the jump-host?
<sinzui> redir: I didn't...I thought I had done that for the s390x host. I will add it since your question implies I need to
<redir> sinzui: cool. tx.
<sinzui> try it now redir
<redir> sinzui: voilà. merci
<redir> sinzui: is this also expected to be an issue on 1.x?
<sinzui> redir: no, the failure wasn't seen on 1.25 when it was tested yesterday
<redir> OK. the provider/openstack tests pass on both platforms.
<redir> sinzui:^ I'll refrain from running the full suite since the hosts seem pretty oversubscribed.
<sinzui> redir: the arm64-slave host should be idle for about 60-90 minutes
<redir> review anyone? https://github.com/juju/juju/pull/5358
<redir> RB hasn't picked it up yet
<redir> manually created one http://reviews.vapour.ws/r/4785/
<redir> Bueller?
<natefinch> redir, ericsnow: No space left on device: '/tmp/reviewboard.UipOWm'
<ericsnow> natefinch: gah
<redir> whoops
<ericsnow> natefinch: time to add a cron job :)
<natefinch> ericsnow: cron job to delete garbage in tmp?
 * natefinch hi5's eric
 * ericsnow hi5's natefinch
<redir> whelp, now we know why it wasn't picking up new PRs
<redir> there's the actual PR https://github.com/juju/juju/pull/5358
<redir> not a lot of there, there.
<natefinch> oh man, json's policy of not allowing commas on the last item of a list is horrible
<natefinch> redir: it's just an addition to something in export_test?
<redir> natefinch: yep, so that tests on arm64 and ppc64el can find appropriate images in test runs
<natefinch> ahh, wacky
<redir> probably something I should have done in the lts updates, but...I didn't know.
<natefinch> LGTM
<redir> well tools
<redir> we by default build amd64 tools for supported series and then whatever the host arch is.
<redir> but there were no images in the exported index for those host arches
<redir> natefinch: tx.
<redir> $$merge$$ing
<redir> ran the failing tests on the respective arches and it passed on both.
<natefinch> cool
 * redir steps out for a bite
<redir> now redir really steps out.
<redir> what is 08d:|?
<katco> ericsnow: whoops meant to ping you here
<katco> ericsnow: hm. review board not picking up my latest commit to a PR. suggestions?
<redir> katco: seems it is  out of disk
<katco> redir: how are you discovering this?
<redir> I made a post manually from rbtools
<katco> redir: ah
<redir> then http://reviews.vapour.ws/r/4785/diff/#
<redir> well nate noted it above when he tried to review
<katco> redir: ah
<ericsnow> katco: looking into it
<katco> ericsnow: ta
<ericsnow> katco: having trouble connecting to the juju environment
<redir> must be friday
<redir> The last blocker is windows.
#juju-dev 2016-05-07
<redir> anyone still around?
<redir> http://reviews.vapour.ws/r/4787/
 * redir goes EoW RSN (tm)
<redir> have a great weekend juju-dev
<mup> Bug #1553347 changed: juju bootstrap currently taking longer than 30mins <maas> <juju-core:Expired> <https://launchpad.net/bugs/1553347>
#juju-dev 2016-05-08
<ejat> 2016-05-08 04:15:09 INFO install However the following packages replace it:
<ejat> 2016-05-08 04:15:09 INFO install   mysql-testsuite-5.7 mysql-server-core-5.7 mysql-client-5.7
<ejat> 2016-05-08 04:15:09 INFO install   mariadb-server-core-10.0 mariadb-client-10.0
<ejat> 2016-05-08 04:15:09 INFO install
<ejat> 2016-05-08 04:15:09 INFO install E: Package 'mysql-client-5.6' has no installation candidate
<ejat> 2016-05-08 04:15:09 INFO install Traceback (most recent call last):
<mup> Bug #1579051 changed: Race in juju/controller/destroy and TestDestroyCommandConfirmation <blocker> <ci> <race-condition> <regression> <juju-core:Invalid> <https://launchpad.net/bugs/1579051>
<jackweirdy> Is there any documentation on how to create a docker provider?
#juju-dev 2017-05-01
<axw> babbageclunk: would you mind reviewing a couple of small testing PRs? https://github.com/juju/testing/pull/125 and https://github.com/juju/testing/pull/126
<babbageclunk> axw: sure
<babbageclunk> axw: approved!
<axw> babbageclunk: ta
<axw> babbageclunk: and when you have another moment, here's the juju/juju one: https://github.com/juju/juju/pull/7297
<axw> should make landings faster and more reliable
<babbageclunk> axw: done.
<axw> babbageclunk: cheers
<veebers> axw nice!
<axw> veebers: :)
<axw> gotta sort out a failure in the PR it seems, will do after I'm finished reviewing stuff
<axw> veebers: any idea what's going on with http://juju-ci.vapour.ws:8080/job/github-merge-juju/10793/?
<axw> "Build has been executing for 2 days 13 hr on juju-core-slave-b"
 * axw cancels
<axw> veebers: might have something to do with the error on https://github.com/juju/juju/pull/7295 ... :)
 * veebers looks
<veebers> axw: ah good catch, thanks for that :-) That's something that needs to be followed up.
<axw> veebers: http://juju-ci.vapour.ws:8080/job/github-merge-juju/10794/console looks like it might be doing the same thing
<axw> check_blockers shouldn't take long, right?
<veebers> axw: aye just checking. Looks like it's using old creds, updating now
<axw> cool
<veebers> axw: ok, I've updated the job, aborted that previous run and the next run started and has gotten further
<axw> veebers: cool, thanks
<veebers> axw: Thanks for bringing that to my attention :-)
<axw> np
<veebers> axw: oh actually, should probably abort all those other jobs until the version one passes
<axw> veebers: why's that?
<axw> veebers: oh, so they come after
<axw> yeah
<veebers> axw: sorry was busy cancelling jobs. Yeah so they come after :-)
<veebers> axw: I've merge commented on that PR but I'm not 100% confident my account will trigger it :-\
<veebers> easy fix, just force rebuild that job :-P
<axw> veebers: I've commented too
<axw> either way
<axw> bot is having a little sleep, hasn't picked up my comment either
<axw> the gui one snuck in again
<axw> canceled it again
<axw> or I was looking at an out of date tab ...
<veebers> axw: hah it's possible you aborted the version update job :-)
<axw> veebers: it's still running
<axw> ph
<axw> oh
<axw> maybe I did
<axw> sigh
<veebers> :-)
 * axw backs away from the jenkins
<mup> Bug #1662272 changed: Agents stop running hooks and are hung <canonical-is> <juju-core:Won't Fix> <https://launchpad.net/bugs/1662272>
<mup> Bug #1662272 opened: Agents stop running hooks and are hung <canonical-is> <juju-core:Won't Fix> <https://launchpad.net/bugs/1662272>
#juju-dev 2017-05-02
 * babbageclunk goes out for a walk.
<babbageclunk> axw: thanks for the comments - can you expand on your first one a bit though? I think that's something I didn't know about.
<axw> babbageclunk: the private-address relation setting is only ever set when a unit enters scope
<axw> babbageclunk: and juju will not attempt to replace the value afterwards
<axw> babbageclunk: because juju can't tell if it set the value, or the charm did
<babbageclunk> axw: And after that the charm can monkey with it if it wants?
<axw> babbageclunk: right, e.g. for "proxy charms", which report the address of some other thing on a different machine
<axw> babbageclunk: so, it's up to the charm to update the private-address relation setting if it sees the addresses change
<babbageclunk> axw: So I think that makes the existing approach where we put in the public address invalid, doesn't it?
<axw> babbageclunk: well, it'll work up until the point where the public address changes
<axw> babbageclunk: but yes, I think we're buggered after that
<axw> babbageclunk: I think we need network-get for this to work
<babbageclunk> axw: or if the charm updates private-address itself without knowing that it's in a cross-model situation
<axw> babbageclunk: actually, network-get already exists and is what should be used
<axw> babbageclunk: so we just need to make sure network-get does the right thing for remote relations
<axw> i.e. uses the remote space & endpoint binding info to determine which address to report
<babbageclunk> axw: ok
<axw> babbageclunk: juju help-tool network-get, if you're not familiar with it
<babbageclunk> axw: I got confused reading Ian's comments - it seems like the two of them are at odds to me.
<babbageclunk> In the first one he's suggesting to get the information at the point it's needed, and in the second he agrees that we should store the spaces and bindings in the model.
<axw> babbageclunk: indeed, I'm a bit lost there too. IMO we just store the remote spaces and remote application endpoint bindings when creating the remote application entity in state. then we can determine the address to use by querying state, rather than the remote side again
<babbageclunk> axw: Yeah, that makes sense to me.
<axw> babbageclunk: so network-get would call through to the controller, which would check the routability from that existing info, and either return the cloud-local or the public address
<axw> and similar when doing EnterScope
<babbageclunk> axw: Yup yup.
<babbageclunk> axw: Ok, I'll write that up as I understand it and keep going forward with that.
<babbageclunk> axw: Thanks!
<axw> babbageclunk: cool, no worries
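The address-selection logic axw sketches above (network-get checking routability against stored remote space info, then returning either the cloud-local or the public address) can be illustrated in Go. All type and function names here are hypothetical stand-ins, not juju's actual API:

```go
package main

import "fmt"

// Addresses is an illustrative pair of candidate addresses for a unit
// participating in a cross-model relation.
type Addresses struct {
	CloudLocal string
	Public     string
}

// selectRelationAddress mirrors the decision described above: if the
// remote space is routable from the local one (as determined from the
// stored remote space and endpoint binding info), report the
// cloud-local address; otherwise fall back to the public address.
func selectRelationAddress(addrs Addresses, routable bool) string {
	if routable {
		return addrs.CloudLocal
	}
	return addrs.Public
}

func main() {
	a := Addresses{CloudLocal: "10.0.0.4", Public: "203.0.113.7"}
	fmt.Println(selectRelationAddress(a, true))  // routable: cloud-local
	fmt.Println(selectRelationAddress(a, false)) // not routable: public
}
```

In the design discussed, the `routable` flag would come from the controller comparing the stored remote spaces against the local endpoint bindings, rather than querying the remote side again.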
<wpk> https://github.com/juju/utils/pull/278 anyone
<wpk> ?
<rogpeppe> axw: ping
<axw> rogpeppe: pong
<rogpeppe> axw: hiya
<rogpeppe> axw: just wondering what the best thing to do about fixing the model-defaults command - it makes an API connection at Init time, which isn't allowed any more (and was never a great idea anyway IMO)
<rogpeppe> axw: the idea that you can't tell if you've provided a cloud argument without going to the API and looking it up seems odd
<rogpeppe> axw: perhaps i should just move all that logic to Run, but my "this is not right" head is getting in the way here :)
 * axw looks
<axw> rogpeppe: that command is pretty gross, but I don't think we can break the ambiguity now
<axw> rogpeppe: I think the best we can do really is to move it to Run
<rogpeppe> axw: ok, will do :-\
<axw> rogpeppe: since the cloud names known to the client aren't necessarily the same as what's in the controller
<rogpeppe> axw: indeed. and neither are the model keys
<axw> yep
<rogpeppe> axw: it would be a bit of a problem if someone made a cloud with a name that matched a model config key...
<rogpeppe> axw: thanks for your review of my rather large PR, BTW
<axw> rogpeppe: no worries
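The Init/Run split being agreed on above can be sketched as follows: Init stays side-effect free and only records the argument, and the cloud-name vs model-key ambiguity is resolved in Run, where an API connection is allowed. The types and the `knownClouds` lookup are hypothetical simplifications, not the real model-defaults command:

```go
package main

import "fmt"

// modelDefaultsCommand is an illustrative command whose first argument
// may be either a cloud name or a model config key; which one it is
// can't be known until we can ask the controller.
type modelDefaultsCommand struct {
	arg string
}

// Init just parses: no API connection here.
func (c *modelDefaultsCommand) Init(args []string) error {
	if len(args) > 0 {
		c.arg = args[0]
	}
	return nil
}

// Run disambiguates using controller-side knowledge (stubbed here as a
// map standing in for an API call).
func (c *modelDefaultsCommand) Run(knownClouds map[string]bool) error {
	if knownClouds[c.arg] {
		fmt.Println("cloud:", c.arg)
	} else {
		fmt.Println("key:", c.arg)
	}
	return nil
}

func main() {
	cmd := &modelDefaultsCommand{}
	cmd.Init([]string{"aws"})
	cmd.Run(map[string]bool{"aws": true})
}
```

As axw notes, the lookup has to happen controller-side because the clouds known to the client aren't necessarily the same as those in the controller.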
<rogpeppe> wpk: reviewed https://github.com/juju/utils/pull/278
<wpk> rogpeppe: thanks, updated
<blahdeblah> kwmonroe, cory_fu: Either of you around at the moment?  (Or anyone else who knows about cwr-ci.)
<blahdeblah> Just wondering if you can clarify one point from the cwr-ci README: how must we refer to the charm to be tested in the bundle?
<blahdeblah> The doc says to run "juju run-action cwr/0 cwr-charm-commit repo=http://github.com/myself/my-awesome-charm charm-name=awesome-charm reference-bundle=~awesome-team/awesome-bundle"
<blahdeblah> However if I "charm proof" a bundle with anything but a pinned awesome-charm revision, I get a warning.
<blahdeblah> So the question is: how do I reference the charm within the testing bundle?
<blahdeblah> Or maybe it doesn't matter?  Does CWR just deploy the bundle, then upgrade to the git version of my charm?
<blahdeblah> Also, I've just (apparently) successfully pushed, granted, and published the bundle I want to start with, but it's not showing up in the store: http://pastebin.ubuntu.com/24498131/  Am I doing something wrong?
<rick_h> blahdeblah: yea, I'd not worry about the warning about the version number on the charm
<rick_h> blahdeblah: it's a warning that the charm could move under you and that bundle might be broken if you don't realize it, but it's just a warning
<blahdeblah> rick_h: thanks - any idea about how to a) make the bundle visible, and b) refer to the charm to be tested by cwr-ci within the bundle?
<rick_h> blahdeblah: so to make the bundle visible you have to grant EVERYONE permission to it with the charm cli tool.
<rick_h> blahdeblah: as far as cwr-ci part I'm not sure.
<blahdeblah> rick_h: You mean EVERYONE in caps?  Or everyone just like the pastebin above shows?
<rick_h> blahdeblah: sorry looking. I missed the pastebin
<rick_h> blahdeblah: try the grant command w/o the revision on the url
<rick_h> blahdeblah: and try a `charm show cs:~paulgear/bundle/ntp-server perm`
<rick_h> blahdeblah: to check the permissions
<blahdeblah> rick_h: thanks - that did the trick
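rick_h's suggestions above amount to something like the following. The `--acl` flag and the special `everyone` user are assumptions about the charm CLI's grant syntax (check `charm help grant`); the `charm show ... perm` invocation is quoted from the chat:

```shell
# Grant read access to everyone, using the URL without a revision,
# so the grant applies to the bundle rather than one pinned revision.
charm grant cs:~paulgear/bundle/ntp-server --acl read everyone

# Verify the resulting permissions.
charm show cs:~paulgear/bundle/ntp-server perm
```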
#juju-dev 2017-05-03
<axw> hml: if you're going to update goose again any time soon, can you please remove the StatusBuildSpawning constant from nova?
<axw> assuming we don't reference it anywhere else
<hml> axw: sure - will add it to the list of clean up for goose.v2
<axw> hml: thanks
<veebers> axw, hml: Is this related to https://github.com/juju/juju/pull/7300 at all? (please note that I'm investigating this job as there has been a bunch of unexpected failures with it)
<axw> veebers: yes
<hml> veebers: i tried running the tests before the merge, it was a moving target with results over 4 tries, i was a little surprised the merge went thru.
<veebers> hml: which merge was that?
<hml> veebers: for PR 7300
<veebers> hml: it didn't go through, that failed (unless I'm missing something?)
<hml> veebers: i got confused.  i was thinking of a different one.
<veebers> hml: ack
<hml> veebers: there is a merge conflict with dependencies.tsv causing the issues… with pr7300 it looks like
<hml> veebers, i need to straighten out the goose dependency before i do the merge
<hml> pr 7289 was getting the ci merge failures.
<veebers> hml: ack, I've just changed what machine the merges run on which seems to have helped. I need to have a look and see if that other machine is sick or so
<hml> veebers: ok, i fixed the merge conflict on 7300, not ready to merge yet though
<veebers> hml: ack
<veebers> axw, hml: Please alert me to any check/merge job failures that seem unwarranted. I'll be keeping an eye, but let me know if I miss anything :-)
<axw> veebers: will do
<hml> veebers: sure
<rogpeppe> axw: here's a little cleanup of the migrate command to make it use the ModelCommandBase: https://github.com/juju/juju/pull/7302
<wpk> axw: could you comment on PR7204 about the bridge_ports issue? Is it a problem or not?
<mup> Bug #1688028 opened: Juju fails to request deploy after busy node returns OK <juju-core:New> <https://launchpad.net/bugs/1688028>
<mup> Bug #1688028 changed: Juju fails to request deploy after busy node returns OK <cdo-qa> <cdo-qa-blocker> <juju:New> <juju-core:Invalid> <https://launchpad.net/bugs/1688028>
<babbageclunk> gah, does anyone else have trouble remembering which checkers are in gc and which are in jc? I'm tempted to refer to the gc ones in jc so it's a superset and then just use them from there all the time.
#juju-dev 2017-05-04
<blahdeblah> Anyone able to advise on the proposed release date for juju 2.2?
<cmars> blahdeblah, toward the end of may
<blahdeblah> cmars: thanks
<mup> Bug #1688177 opened: juju show-action-status needs more details <canonical-bootstack> <juju-core:New> <https://launchpad.net/bugs/1688177>
<axw> wpk: did you delete your comment about keeping HardwareId? I can't see it in the web UI, but I got an email
<axw> wpk: anyway, there's no point in filling in HardwareId if it's a WWN path. HardwareId should only be populated if it's (e.g.) ata-KINGSTON_SV300S37A240G_50026B724501D2D2. if it's wwn-... it'll never match anything
<wpk> axw: I've re-added it as a part of review.
<wpk> axw: why it'll never match anything?
<axw> wpk: because the worker that publishes block device info will never put /dev/disk/by-id/wwn-... in the BlockDevice.HardwareId field
<wpk> axw: ok, just wanted to make sure, thanks
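The naming convention axw refers to can be sketched as a small classifier over `/dev/disk/by-id` names: a `wwn-` prefix identifies the disk by its World Wide Name, while other prefixes (e.g. `ata-`) are hardware IDs. The `BlockDevice` fields here are illustrative, not juju's real struct:

```go
package main

import (
	"fmt"
	"strings"
)

// BlockDevice is an illustrative struct with the two identity fields
// discussed above; only one should ever be populated for a given
// /dev/disk/by-id name.
type BlockDevice struct {
	HardwareId string
	WWN        string
}

// classifyDiskByID routes a /dev/disk/by-id name into the appropriate
// field: wwn-... names carry the World Wide Name, anything else
// (ata-..., scsi-..., etc.) is treated as a hardware ID.
func classifyDiskByID(name string) BlockDevice {
	if strings.HasPrefix(name, "wwn-") {
		return BlockDevice{WWN: strings.TrimPrefix(name, "wwn-")}
	}
	return BlockDevice{HardwareId: name}
}

func main() {
	fmt.Printf("%+v\n", classifyDiskByID("ata-KINGSTON_SV300S37A240G_50026B724501D2D2"))
	fmt.Printf("%+v\n", classifyDiskByID("wwn-0x50026b724501d2d2"))
}
```

Filling HardwareId with a `wwn-` path would never match anything published by the block device worker, which is why the field should stay empty in that case.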
<rogpeppe> axw: do you know what "ExternalControl" means when migrating, by any chance?
<axw> rogpeppe: nope, sorry
<rogpeppe> axw: ok, ta
<ashipika> hey guys.. who is the core ocr today?
<tasdomas> jam, ping?
<tasdomas> cmars, ping?
<cmars> tasdomas, pong
<cmars> tasdomas, how's it going?
<tasdomas> cmars, hey
<tasdomas> ashipika, ^
<cmars> tasdomas, i think i've resolved ashipika's issue :D
<cmars> but really we have a slight CI UX issue
<ashipika> indeed we have
<tasdomas> cmars, ah, good
<ashipika> slight* :)
<tasdomas> ;-]]
<cmars> although, and maybe this is bad, i usually just $$merge$$ if it passes all the tests locally
<cmars> i do run them
<cmars> promise :)
<tasdomas> a guillotine is just a barber's chair with poor ux ;-]
<cmars> LOL
<cmars> that's awesome tasdomas
<ashipika> tasdomas: i'd panic("arrrggh") in that chair :)
<wpk> while setting up on AWS with --constraints spaces=... I'm getting machines stuck in:
<wpk> 6        pending                 pending              xenial                 Setting up groups
<wpk> Anyone seen something like that?
<rick_h> wpk: how many security groups do you have and did you hit a cap on them?
<rick_h> wpk: might try "juju show-machine 6" and see if anything's there and then have to hit the logs
<wpk> rick_h:
<wpk>     machine-status:
<wpk>       current: allocating
<wpk>       message: Setting up groups
<wpk> it only hangs if I use constraints spaces, without this option I can add-machine just fine
<rick_h> wpk: yea, so coming from https://github.com/juju/juju/blob/abf46a71316cd64e67db4f5878f9638de82e2e79/provider/ec2/environ.go#L499 I'd think you hit something with security groups there in AWS.
<rick_h> wpk: have to poke at the AWS console and see what's up? Or maybe it's asking for something that AWS is erroring with that is not getting caught
<wpk> aha: 2017-05-04 17:02:15 WARNING juju.provisioner provisioner_task.go:739 failed to start instance (cannot set up groups: fetching security group "juju-edf... (in VPC "vpc-01...."): Tags could not be retrieved. (Unavailable)), retrying in 10s (3 more attempts)
<rick_h> wpk: hmm, so did it create the group and not able to find it over the api? Or is something else causing the connectivity issue between the aws api/controller?
<wpk> I can see this group in console
<wpk> rick_h nothing to do with security groups, all to do with spaces - if a space is not available for a certain AZ/VPC combination the result is as above
<rick_h> wpk: well that's less than helpful heh
<axw> babbageclunk: standup?
<babbageclunk> axw: D'oh sorry - omw
#juju-dev 2017-05-05
<babbageclunk> axw: could you review https://github.com/juju/description/pull/12 please? Part of adding remote spaces and subnets to state.
<babbageclunk> axw: annoyingly the biggest part.
<axw> babbageclunk: just having lunch atm, will review soon after
<babbageclunk> axw: cool cool - no rush
<axw> babbageclunk: I think we should have used protobuf or something similar for this :/
<axw> which I think you were getting at this morning
<babbageclunk> axw: yeah, agreed - something that supports versioning - it's a lot of fiddly nearly-boilerplate.
<axw> babbageclunk: done, sorry for the delay
<gsamfira> heya
<gsamfira> anyone have time for a quick review? :)
#juju-dev 2017-05-07
<babbageclunk> axw: ping?
<babbageclunk> axw: anyway, I think you're right about reusing subnet in description instead of adding remote subnet - I'm going to make a V3 of subnet to make them work for this and get rid of description/RemoteSubnet.
#juju-dev 2018-04-30
<hml> externalreality: i was curious… https://paste.ubuntu.com/p/zq8RmvJJs6/  restore-backup -b appears to have worked with openstack.
<hml> rick_h_: options on flag and command names please?  https://docs.google.com/document/d/1UtJaKnWxKFXoH1JfL7Q_ARaTxTo63QpdS-WO4VILF4M/edit#
<rick_h_> hml: can I get comment access please?
<hml> rick_h_: sure - thought i had those darn permissions correct
<rick_h_> hml: one question is if there's other db space items like that (charms cached and such?)
<rick_h_> Resources
<hml> rick_h_: i have to look, there are a few databases, i'm not sure of all
<hml> rick_h_: are you pondering if this should be a more generic command?
<rick_h_> hml: my first instinct is that this should be a generic thing vs tied to remove-db specificly. Maybe the flag makes sense as a single command to do two steps but maybe this is wider than being backup specific
<mup> Bug #1768064 opened: Juju messes up when terminating AWS instances <juju-core:New> <https://launchpad.net/bugs/1768064>
<hml> rick_h_: took a look at the dbs, those items are not split out from each other afaik
<hml> rick_h_: i'm not sure where else we allow users to delete something like this
<rick_h_> hml: what do you mean "not split out from each other"?
<hml> rick_h_: they are all in the same database… resources and the like - though charms may be in the blobstore like backups are
<rick_h_> hml: got a sec to chat?
<hml> rick_h_: sure
<rick_h_> hml: pm'd
<hml> rick_h_: omw
<hml> balloons: ??
<balloons> hml, I saw that you had cancelled?
<hml> balloons: I meant to cancel next week
<balloons> ahh
<hml> balloons: thought that's what i did
<balloons> you did :-)
<balloons> hml, so we can chat now if you'd like :-)
<hml> balloons: sure
#juju-dev 2018-05-01
<externalreality> hml, the `restore-backup` -b option is failing for lxd >= 2.3 providers since it tries to read configuration /etc/default/lxd-bridge.
<externalreality> apparently it should try to get the configuration from the api instead when using said versions since the file may not exist.
#juju-dev 2018-05-02
<hml> good morning o/
<hml> externalreality_: ping
<hml> code review please?  https://github.com/juju/juju/pull/8663
<hml> externalreality_, balloons ^^
<externalreality_> hml, will look
<hml> externalreality_: thanks!
#juju-dev 2018-05-03
<admcleod_> could someone please point me to the code where private-address is set? i have had a discussion with stub about this not being the best way to get the private ip due to some bugs, but im very confused because its in a test environment that had previously been working, so its quite possible its something i changed.
<veebers> bab
<veebers> huh, how did that happen :-
<hml> good morning
<hml> balloons: do we support the case of old client new controller?
<balloons> hml, that can get a little hairy
<hml> balloons: ack -
<balloons> hml, but in general, it's certainly possible for there to be a little skew
<balloons> I think the skew typically happens the other way though.. older controller, newer client
<balloons> most folks seems to try and match them
<admcleod_> hi guys, any movement on this: https://bugs.launchpad.net/juju/+bug/1765571 ?
<mup> Bug #1765571: lxd container fails to launch on bionic host: No associated target operation <juju:Triaged by manadart> <https://launchpad.net/bugs/1765571>
<hml> externalreality_: I'm pushing up the changes based on your comments to https://github.com/juju/juju/pull/8663, please review.  working on the related test changes now
#juju-dev 2018-05-04
<manadart> admcleod_: I am on leave, back Monday. I will tackle https://bugs.launchpad.net/juju/+bug/1765571 next week.
<mup> Bug #1765571: lxd container fails to launch on bionic host: No associated target operation <juju:Triaged by manadart> <https://launchpad.net/bugs/1765571>
<hml> g'morning
<balloons> good morning all
#juju-dev 2020-05-01
<mup> Bug #1876351 opened: bootstrap failed with Tools checksum mismatch <juju-core:New> <https://launchpad.net/bugs/1876351>
<mup> Bug #1876351 changed: bootstrap failed with Tools checksum mismatch <juju-core:New> <https://launchpad.net/bugs/1876351>
