[02:59] <wallyworld__> thumper: yo
[03:12] <thumper> wallyworld__: hey
[03:12] <thumper> wallyworld__: how's it going today?
[03:12] <wallyworld__> thumper: notice how i didn't ping :-)
[03:12] <wallyworld__> going ok, deep into some refactoring
[03:12] <thumper> wallyworld__: as you may have seen, I've put the kvm broker review up
[03:12] <thumper> wallyworld__: yeah...
[03:12] <wallyworld__> thumper: funny that, i have a question
[03:12] <thumper> wallyworld__: I'm now getting containers to return hardware characteristics
[03:13] <wallyworld__> quick hangout?
[03:13] <wallyworld__> https://plus.google.com/hangouts/_/76cpj9mtcok4cua6di82i0o3ms?hl=en
[03:14]  * thumper joins
[04:39] <thumper> wallyworld__: https://codereview.appspot.com/36980043/
[04:40] <wallyworld__> \o/
[04:41] <thumper> wallyworld__: it's a start
[04:41] <wallyworld__> yes
[04:45] <thumper> thanks for the review
[04:45] <thumper> I'll tweak and land tomorrow morning
[04:45] <thumper> night all
[05:28] <axw> gtg to my daughter's school orientation, bbl
[06:43] <axw> back
[08:14] <rogpeppe> mornin' all
[08:16] <axw> morning
[08:24] <jam> smoser: there was a discussion about allowing charms to add constraints (like mem, etc), I could see that being extended to support stuff like "!lxc". However, that probably isn't on the roadmap for this cycle.
[09:18] <axw> jam: I forget, do we need to support old CLI with new server?
[09:18] <axw> jam: just wondering if I can remove secrets pushing API
[09:19] <axw> server-side
[09:19] <jam> axw: the old CLI didn't push secrets via PI
[09:19] <jam> API
[09:19] <jam> so yes, but we don't have to keep that bit
[09:19] <jam> axw: we have to allow the old CLI direct DB access
[09:19] <axw> jam: ah, it's only on trunk isn't it?
[09:19] <axw> no.. I broke the last release
[09:20] <jam> axw: ? you broke the new CLI connecting to the old server
[09:20] <jam> IIRC
[09:20] <jam> but we can drop that bit
[09:20] <jam> right now we are trying and if it fails just continuing
[09:20] <axw> ok
[09:20] <jam> with synchronous bootstrap, we don't even have to try anymore :)
[09:20] <axw> jam: well, we should still push secrets for old servers, right?
[09:21] <axw> or do we not care?
[09:21] <axw> assume they're already set up?
[09:21] <jam> axw: even if there was a 1.17.0 that had async bootstrap and we were pushing via the API as a dev release, we don't have to push secrets to it in 1.17.1
[09:21] <jam> axw: we don't care if it is only a dev release
[09:21] <jam> that's why we call them *dev*
[09:21] <axw> jam: sorry, I mean, can we drop the code that pushes secrets to existing installations
[09:21] <jam> it is our way of "lets get this out there, without committing to supporting migration to/from it"
[09:21] <axw> of non-dev
[09:22] <jam> axw: so for pushing secrets directly to the DB, I'm not sure if we can drop it.
[09:22] <jam> "maybe"
[09:22] <jam> we had talked about "if you bootstrap with 1.16 and then *never do anything with it* and then try to connect with 1.17" that might be broken, but we're not sure we care
[09:22] <axw> I really don't think we should
[09:23] <jam> (you can destroy-env & rebootstrap because you have nothing in your env, or you can connect with the 1.16 that you bootstrapped, etc)
[09:23] <axw> indeed
[09:23] <jam> axw: so *I* wouldn't immediately say "when connecting via direct DB access don't pass secrets"
[09:23] <jam> axw: it isn't worth poking that code just to remove it if we don't have to touch it at all
[09:23] <jam> axw: I'd much rather focus on "we never connect to the DB for a 1.18 client and server"
[09:23] <jam> so that instead of having a "and now we don't push secrets" we end up with a "and now we never connect to the DB"
[09:24] <jam> axw: so *probably* we could drop it, but I'd rather get to the point where we can drop NewConnFromName completely
[09:24] <axw> understood
[09:24] <axw> hmm
[09:25] <axw> jam: the reason why I'd like to remove it altogether, is then we can get rid of the idea of secrets
[09:25] <axw> but I can leave it for now I guess
[09:27] <jam> axw: we can't. we still have to not put the secrets into cloud-init, and only pass them once we connect to the bootstrap node
[09:27] <jam> axw: the reason we have them today is to not put them into cloud-init
[09:27] <axw> jam: that bit's irrelevant, we never put any config into cloud-init anymore
[09:27] <axw> (with synchronous bootstrap)
[09:28] <axw> cloud-init now does ssh keys, and that's it
[09:29] <jam> axw: so IMO, changing that isn't our first priority right now. I do like that, but I'd be fine if getting rid of the notion of secrets was in 1.20
[09:29] <axw> mmkay
[09:29] <jam> axw: vs, we actually have to finish stuff like upgrade-charm and status or we can't remove direct DB access in 1.20
[09:31] <jam> axw: and, of course, time spent with sinzui to make sure CI is happy is time very well spent.
[09:31] <axw> jam: indeed. not sure what's going on there :/
[09:31] <axw> I can't reproduce the issues on either canonistack or ec2
[09:32] <jam> axw: I haven't been able to either. there was a comment about connection flakiness for them. It's certainly odd that we got a machine up and running and then didn't have SSH configured.
[09:32] <jam> axw: one thing I was wondering
[09:32] <jam> how do we pick the ssh private key to connect with?
[09:32] <jam> If I set "authorized-keys: foobar" in my environments.yaml how do you match that with eg ~/.ssh/id_rsa_ec2key
[09:33] <axw> jam: by default I think it takes id_dsa.pub, id_rsa.pub and identity.pub from ~/.ssh
[09:34] <axw> jam: not sure what you mean by "how do you match that"
[09:34] <axw> I think ssh will just cycle through all the possible private keys on ~/.ssh?
[09:35] <jam> axw: no
[09:35] <jam> it cycles through everything in your ssh-agent
[09:35] <jam> I think
[09:36] <jam> but for direct keys you can configure it in ~/.ssh/config or supply it as "ssh -i PATHTOKEY" or a couple of other ways.
[09:36] <jam> but I'm pretty sure it doesn't just try keys on disk
[09:36] <jam> I could be wrong, and I've overspecified my ssh config
[09:36] <axw> no actually I think you're right
[09:36] <jam> but I was wondering if what sinzui and aaron were seeing was because of keys not getting picked up correctly.
[09:36] <axw> it'll also try those defaults I mentioned
[09:37] <axw> jam: well, that bit hasn't changed at all, so I don't understand how that'd be the case
[09:37] <jam> axw: right, the specific 2/3 private bits
[09:37] <jam> axw: they weren't having Juju drive SSH before
[09:37] <jam> axw: in the test suite
[09:37] <axw> ah, true.
[09:37] <axw> :)
[09:37] <jam> they were copying logs off via scp at one point
[09:37] <jam> but that command *might* be configured specially
[09:38] <jam> the fact that they get "Permission denied (publickey)" sometimes hints that *something* is doing that.
[09:38] <jam> axw: for Canonistack, I've heard another wrinkle
[09:38] <jam> if you give a Canonistack instance a floating IP
[09:38] <jam> that becomes a world-routable IP, but it is *not* routable within the cloud
[09:38] <jam> so machine A on canonistack has to talk to machine B in canonistack via the private address, and *not* the floating ip
[09:39] <jam> axw: so you might try "use-floating-ip: true" in your canonistack config and see if that breaks bootstrap for you
[09:39] <axw> hurngh
[09:39] <jam> it shouldn't from local
[09:39] <jam> but it might from cstack => cstack
[09:39] <axw> no, but maybe from jenkins
[09:39] <jam> axw: ah, also
[09:40] <jam> you can't directly connect to CStack machines
[09:40]  * jam => lightbulb
[09:40] <axw> yeah, gotta sshuttle
[09:40] <rogpeppe> anyone know anything about amazon request limits?
[09:40] <jam> axw: actually most people configure their SSH to bounce via chinstrap
[09:40] <jam> axw: ProxyCommand ssh chinstrap.canonical.com nc -q0 %h %p
[09:40] <rogpeppe> i'm still trying to get this guy's environment up, and we're seeing "Request limit exceeded" errors
[09:41] <jam> axw: so for CStack abentley and sinzui both probably have that, which would let them SSH to a machine, but would *not* let them Dial a machine
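The chinstrap bounce jam describes is usually wired up with a ProxyCommand stanza. A sketch of such a ~/.ssh/config entry follows; the host pattern, user, and key path are illustrative, not taken from anyone's actual config:

```
# Hypothetical ~/.ssh/config entry: bounce Canonistack SSH through chinstrap.
Host 10.55.* *.canonistack
    User ubuntu
    IdentityFile ~/.ssh/id_rsa_canonistack
    ProxyCommand ssh chinstrap.canonical.com nc -q0 %h %p
```

As noted above, this makes interactive `ssh` work through the bastion, but a program doing a raw TCP Dial to the same address would still fail.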
[09:41] <axw> jam: doesn't explain the permission denied error tho
[09:41] <jam> rogpeppe: more than 100/5s is going to trigger their limits, but I don't know what the actual values are (xplod charm was causing amazon request limit exceeded problems before we fixed it)
[09:42] <jam> axw: It would explain not being able to connect, but yes, doesn't explain perm denied
[09:42] <axw> I think you're probably onto something with the private keys/them not using ssh before
[09:42] <jam> axw: connect as "ubuntu" user?
[09:42] <axw> jam: yep
[09:42] <rogpeppe> jam: we're still seeing this error after a night of inactivity
[09:42] <jam> axw: I *think* user@host:port is openssh specific
[09:43] <jam> rogpeppe: whose inactivity :)
[09:43] <jam> rogpeppe: everything shut down?
[09:43] <rogpeppe> jam: well, we stopped jujud-machine-0
[09:43] <jam> rogpeppe: i
[09:43] <rogpeppe> jam: i wonder if the other instances are making ec2 requests
[09:44] <jam> if this was 1.13.2 the User and Machine agents had provider creds, IIRC
[09:44] <rogpeppe> jam: everything is on 1.16.3 AFAIK
[09:44] <jam> so they could have been doing it
[09:44] <jam> rogpeppe: did you actually get it all moved over?
[09:44] <rogpeppe> jam: yes
[09:44] <rogpeppe> jam: but now we're getting this problem
[09:44] <rogpeppe> jam: one issue after another :-(
[09:53] <axw> wow, that ping thread is getting on a bit
[09:54] <rogpeppe> axw: which ping thread?
[09:55] <axw> rogpeppe: warthogs
[09:56] <rogpeppe> axw: ha fun
[09:56] <rogpeppe> axw: i don't look at warthogs much
[09:58] <TheMue> rogpeppe: any chance to take a look at my CL today?
[09:59] <TheMue> rogpeppe: oh, and hello btw
[09:59] <rogpeppe> TheMue: i started on it yesterday, will continue today
[09:59] <TheMue> rogpeppe: ah, looking forward, thx
[09:59] <rogpeppe> TheMue: the main issue i have with it so far is that it always reads from the very start of the file
[10:00] <rogpeppe> TheMue: and for very big files (and log files can be very big) that's a significant waste of resources
[10:01] <TheMue> rogpeppe: I have to admit I stole an algo of yours from the newsgroup :)
[10:01] <rogpeppe> TheMue: ha ha
[10:02] <TheMue> rogpeppe: but doing the initial reading from the end indeed seems better
[10:02] <rogpeppe> TheMue: when was that from?
[10:02] <TheMue> rogpeppe: oh, pretty old, would have to look again
[10:02] <TheMue> rogpeppe: but I liked the clean approach
[10:02] <jam> TheMue, rogpeppe: one thought would be that the api could pass in a possible "bytes from start" to give some context as to where it was looking, which might be a negative number to mean from the end of the file ?
[10:03] <rogpeppe> jam: i think it would be more intuitive if the api passed in number of lines of context
[10:03] <jam> rogpeppe: it would, but it is also *very* hard to do efficiently, vs if you had a byte offset hint
[10:03] <jam> it could even just be a hint
[10:03] <rogpeppe> jam: or even a start date
[10:03] <rogpeppe> jam: it's not too hard
[10:04] <rogpeppe> jam: i've done it before, and tail(1) does it without too much difficulty
[10:04] <TheMue> rogpeppe: but even then you would have to find it in the file first. ok, a binary search could help.
[10:04] <rogpeppe> TheMue: you can't do binary search
[10:04] <rogpeppe> TheMue: but you can read backwards
[10:05] <rogpeppe> TheMue: actually, you could binary search if you're looking for a start date
[10:05] <TheMue> rogpeppe: that's what I meant
[10:05] <rogpeppe> the difficulty with a start date is clock skew
[10:06] <rogpeppe> log lines won't necessarily be in strict date order
[10:06] <rogpeppe> but that might not be too much of a problem in practice
[10:07] <TheMue> rogpeppe: will talk to frankban if that inaccuracy would be ok there
[10:07] <dimitern> rogpeppe, fwereade_, mgz, jam, others interested: PutCharm proposal document for comments https://docs.google.com/a/canonical.com/document/d/1TxnOCLPDqG6y3kCzmUGIkDr0tywXk1XQnHx7G6gO5tI/edit#
[10:07] <rogpeppe> and as for client vs server clocks, you could probably ask for a given duration before the last log message
[10:08] <TheMue> rogpeppe: hey, that's a nice approach, like it
[10:08] <TheMue> rogpeppe: with different operating modes I still can keep the full scan if wanted
[10:09] <rogpeppe> dimitern: looking
[10:11] <jam> TheMue: I think the GUI probably wants "as many old lines as is comfortable to put into the UI" so strictly restricting by date might be unwanted
[10:11] <jam> consider "it failed 2 days ago"
[10:11] <jam> or "it looks failed, when did it fail" ?
[10:11] <jam> So *hinting* to make finding the right value easier sounds good, but assuming it is just a hint is probably worthwhile
[10:12] <rogpeppe> jam: i think that's probably more of an argument for being able to move back in time
[10:12] <jam> I guess you could just seed "estimated size of line" in the code, and then update that estimation for a given request as you read through stuff and filter it
[10:12] <jam> rogpeppe: sure, but what estimate is 'good' for 100 lines
[10:12] <rogpeppe> jam: i don't think we need to estimate in bytes
[10:13] <jam> rogpeppe: I mean we don't have the API hint, but we use an internal hint
[10:13] <jam> given that you might have a really noisy machine that *isn't* in the filter
[10:13] <jam> or the machine you are reading is noisy itself
[10:14] <rogpeppe> jam: are you suggesting this as an optimisation?
[10:14] <jam> rogpeppe: right
[10:14] <jam> I think we don't need to expose it to the UI, and we do already have "num lines" in both the CLI and in what the GUI would like
[10:14] <jam> that said
[10:14] <rogpeppe> jam: i'm not entirely sure i see how it helps
[10:15] <jam> rogpeppe: I think we can land an unoptimized version as long as we can think of a way to make it better when we need to.
[10:15] <rogpeppe> jam: could you explain?
[10:15] <TheMue> jam: yep, wanted is number of lines
[10:16] <jam> rogpeppe: so we want to get (eg) 100 lines of filtered context for the UI. We start at some place near the end, read and determine how many filtered lines are in that space, and then jump back an estimate based on the number of lines we've found so far.
[10:16] <jam> you could make the default start 1MB from the end of the file, which might catch 90% of the actual cases
[10:16] <jam> but the specific tweaks would all be things that we'd actually need real world testing to fine tune
[10:16] <jam> so let's not optimize too much until we have evidence that it is a problem
[10:16] <rogpeppe> jam: so you're suggesting that we might or might not return the number of lines requested by the user?
[10:17] <rogpeppe> jam: personally i prefer to avoid heuristics when we can
[10:17] <jam> rogpeppe: I'm saying we start with an estimate of where those lines might be, and keep looking until we find them, potentially hitting the beginning of the file
[10:17] <jam> rogpeppe: seeking from the end ==> heuristic about how much you should read in one chunk, etc
[10:18] <rogpeppe> one mo, am just going to check if this guy's environment is actually working
[10:18] <jam> if we do "something" that is ~ reasonable, we get most of the benefit and can drive the rest of the work by actual content
[10:19] <jam> TheMue: so I guess my point is, we know the log file can get to multiple GB, so a small amount of "try to get the answer near the end of the file" is worth implementing. But don't do a lot of work to optimize the code until we actually know it is a problem
[10:19] <TheMue> jam: btw, how often do we log rotate?
[10:20] <jam> TheMue: we don't yet
[10:20] <jam> IIRC natefinch started on something, but he wasn't able to get all files, so he stopped trying
[10:20] <TheMue> jam: ouch
[10:21] <jam> (he could get all-machines.log to be better, but juju itself kept the log file handle open, so it just kept writing to the rotated place)
[10:21] <jam> TheMue: bug #1191651
[10:21] <_mup_> Bug #1191651: Juju logs don't rotate. <canonical-webops> <canonistack> <logging> <pyjuju:Triaged> <juju-core:Triaged> <https://launchpad.net/bugs/1191651>
[10:22] <jam> or bug #1078213
[10:22] <_mup_> Bug #1078213: juju-machine-agent.log/logs are not logrotated <amd64> <apport-bug> <canonical-webops> <canonistack> <logging> <precise> <juju-core:Triaged> <juju (Ubuntu):Triaged> <https://launchpad.net/bugs/1078213>
[10:22] <TheMue> jam: ic, and it indeed needs a solution, especially for larger environments
[10:23] <jam> TheMue: sure, and it will probably also end up interfering with debug-log, which we'll want to sort out, but it can be an exercise in the future for now
[10:23] <jam> though I think just rotating all-machines.log would be a big win today
[10:23] <TheMue> jam: yep
[10:23] <jam> even if we aren't rotating everything
[10:24] <jam> all-machines.log is something we can do 'easily' because rsyslog already has hooks for rotating its log files
[10:24] <jam> vs jujud that would need a SIGHUP or something to be added
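Hooking all-machines.log into rsyslog's existing rotation, as jam suggests, could look roughly like the logrotate snippet below. The path, schedule, and counts are assumptions for illustration, not juju's shipped configuration:

```
# Hypothetical /etc/logrotate.d/juju-all-machines
/var/log/juju/all-machines.log {
        weekly
        rotate 4
        compress
        missingok
        postrotate
                invoke-rc.d rsyslog rotate > /dev/null
        endscript
}
```

The postrotate hook tells rsyslog to reopen its output files, which is exactly the step jujud itself lacks without something like a SIGHUP handler.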
[10:27] <TheMue> jam: btw, did you look at the CL so far?
[10:28] <jam> TheMue: only in brief, I didn't review it.
[10:31] <TheMue> jam: ah, ok
[10:31] <rogpeppe> oh darn, another problem encountered.
[10:31] <rogpeppe> 2013-12-04 10:19:20 ERROR juju.provisioner provisioner_task.go:342 cannot start instance for machine "77": cannot set up groups: cannot authorize securityGroup: The permission '36226792-3--1--1' has already been authorized on the specified group (InvalidPermission.Duplicate)
[10:31] <rogpeppe> anyone seen the above error before?
[10:32] <jam> rogpeppe: I saw it once. After I had a permission group and I added ICMP to it. Then destroyed and rebootstrapped
[10:32] <jam> because destroy doesn't delete perm groups
[10:32] <jam> it saw a group that already existed and tried to reconfigure it
[10:32] <jam> apparently in a duplicate fashion
[10:32]  * rogpeppe goes to look at that logic
[10:33] <jam> rogpeppe: I *fixed* it by deleting all the low-numbered juju-ENV-0,1,2 etc groups
[10:33] <rogpeppe> these guys are really seeing the worst of juju
[10:33] <jam> in this case, you'd want to delete juju-ENV-77
[10:33] <jam> juju-ENV-machine-77, I think
[10:36] <rogpeppe> jam: hmm, i guess it might just be an eventual consistency issue
[10:37] <rogpeppe> jam: we revoke first, but perhaps the authorize hasn't seen the initial revoke, so it gives the duplicate error
[10:46] <rogpeppe> jam: FYI i just looked at all their security groups and there's no security group for machine 77
[10:46] <jam> mgz: rogpeppe: TheMue: standup ?
[10:46] <rogpeppe> jam: i think it must be the global group
[10:46] <jam> https://plus.google.com/hangouts/_/calendar/am9obi5tZWluZWxAY2Fub25pY2FsLmNvbQ.mf0d8r5pfb44m16v9b2n5i29ig
[10:47] <jam> could be
[10:48] <mgz> there in a sec
[11:14]  * dimitern gives up on the hangout
[11:24] <jam> mgz: I approved your https://code.launchpad.net/~gz/juju-core/1.16-juju-update-bootstrap-tweak/+merge/196950 but you might want to hold off for a tick
[11:24] <jam> IsStateServer got renamed to IsManager in the old 1.16.4 stuff
[11:24] <jam> so it conflicts there
[11:25] <mgz> wait, we moved the rename back to 1.16 in the end?
[11:25] <jam> mgz: that is part of destroy-machine --force, which is still targeted to the 1.16 series (as in something that customers do need, so we're putting it into the current stable series)
[11:25] <mgz> >_<
[11:26] <mgz> I'll merge and fixup conflicts
[11:26] <jam> mgz: it hasn't landed yet, I accidentally proposed against trunk
[11:28] <mgz> okay, so if I win the race, *you* have to fix it  up? :0
[11:28] <mgz> that was not the correct smilie...
[11:31] <jam> mgz: that is true, but I just marked mine approved
[11:31] <mgz> :D
[11:32] <jam> mgz: well, frank's stuff is in the queue now, so you have a chance
[11:33] <jam> depending on what the bot finds first :)
[11:48] <jam> mgz: mine landed first :)
[11:49] <mgz> I let you win :P
[11:49] <jam> I didn't realize you could do that, what does it look like on your end (Insert horizontal rule)
[11:50] <TheMue> mgz, jam: geeks :D
[12:16]  * TheMue => lunch
[12:52] <axw> hey natefinch. got my xps 15 today - did you have any issues loading 13.10 onto it?
[12:52] <axw> I'm getting weird udev issues at startup :\
[12:52] <axw> about to go back to 12.04 and then upgrade...
[13:00] <natefinch> axw: I just transferred my hard drive from my old machine to this one
[13:01] <natefinch> axw: but that worked fine :D
[13:01] <jam> morning natefinch, I hope you're feeling better
[13:02] <natefinch> jam: got a little extra sleep, feel like a new man... well, not exactly, but not bad :)
[13:02] <mgz> a new-ish man
[13:03] <natefinch> jam: certainly better than I would have if I'd gotten up at 5:20
[13:05] <axw> natefinch: aha. I was thinking of doing that as a last resort :)
[13:08] <natefinch> axw: sorry, it never occurred to me that installation would be a problem
[13:09] <axw> natefinch: no problems, I'm somewhat used to this :(
[13:09] <axw> sad to say
[13:24] <rogpeppe> lunch
[13:47] <dimitern> rogpeppe, jam, any comments on the PutCharm doc?
[13:48] <dimitern> mgz, if you want to take a look as well?
[13:48] <rogpeppe> dimitern: i will have; juggling a few things currently.
[13:48] <dimitern> rogpeppe, sure, take your time
[13:49] <mgz> dimitern: sure, looking
[13:50] <mgz> geh, google doc links are annoying to transfer across... :)
[14:01] <mgz> dimitern: left a couple of notes
[14:05] <niemeyer> rogpeppe: Any news from that issue?
[14:06] <rogpeppe> niemeyer: we got it all working, but they've decided that juju is not for them in the future, sadly
[14:06] <niemeyer> rogpeppe: Due to that issue, or did they bring up any other reason?
[14:06] <rogpeppe> niemeyer: a number of issues
[14:07] <niemeyer> rogpeppe: Okay, well.. at least we have feedback to work on then
[14:07] <rogpeppe> niemeyer: yes - i am putting together a summary
[14:07] <niemeyer> rogpeppe: Thanks a lot
[14:07] <rogpeppe> niemeyer: i asked if they could summarise their juju experience for us too
[14:07] <rogpeppe> niemeyer: they've put a lot of time into it
[14:12] <dimitern> mgz, cheers
[14:21] <sinzui> jamespage, I think devs are willing to forgo an SRU for saucy for the sake of 1.16.4.
[14:36] <jam> sinzui, jamespage: so 1.16.4 is set in stone (IMO) because it is actually being used in the wild. If we decide we have to revert things/move it to 1.18/whatever we can do so as necessary
[14:36] <jamespage> jam, sinzui: so I guess you would like me to upload it to trusty and do the backports dance right?
[14:37] <jam> jamespage: I think we want to make binaries available, I'm not 100% sure how that has to happen
[14:38] <sinzui> jamespage, yes please :)
[14:40] <jamespage> sinzui, jam: OK - but as we probably can't SRU this point release, it can't go into cloud-tools
[14:49] <jamespage> sinzui, jam: uploaded to trusty and backporting for the stable PPA now
[14:50] <sinzui> Thank you jamespage
[15:08] <abentley> jam: It appears that bug #1257481 applies only when hyphen is used as a separator.  Can you confirm?
[15:08] <_mup_> Bug #1257481: juju destroy-environment destroys other environments <ci> <destroy-environment> <juju-core:In Progress by jameinel> <https://launchpad.net/bugs/1257481>
[15:08] <mgz> abentley: yeah, that seems right
[15:09] <mgz> we match machines on juju-ENVNAME-*
[15:09] <mgz> so, be more creative in your naming :0
[15:09] <jam> abentley: yes
[15:10] <mgz> jam: we can just make the pattern better, right?
[15:10] <mgz> in fact, I think it *was* better in pyjuju
[15:10] <jam> mgz: see the branch associated with the bug
[15:10] <mgz> jam: I see no associated branch
[15:10] <jam> mgz: "juju-ENVNAME-machine-\d*"
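The tightened pattern can be checked against the scenario from bug #1257481, where destroying one environment matched machines of another whose name shared the prefix. The environment names below are made up for illustration:

```go
package main

import (
	"fmt"
	"regexp"
)

// Old, too-loose server-name filter vs the tightened one from the fix,
// instantiated for a hypothetical environment named "myenv".
var (
	oldPattern   = regexp.MustCompile(`^juju-myenv-.*$`)
	fixedPattern = regexp.MustCompile(`^juju-myenv-machine-\d*$`)
)

func main() {
	names := []string{
		"juju-myenv-machine-0",
		"juju-myenv-test-machine-0", // belongs to environment "myenv-test"
	}
	for _, n := range names {
		fmt.Printf("%s old=%v fixed=%v\n",
			n, oldPattern.MatchString(n), fixedPattern.MatchString(n))
	}
}
```

The old pattern matches both names, so destroying "myenv" would also sweep up "myenv-test" machines; the fixed pattern only matches its own environment's machines.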
[15:11] <jam> mgz: because I fat-fingered it to 1256481
[15:11] <jam> ugh
[15:11] <mgz> jam: you li... right
[15:11] <mgz> reviewing, at any rate :)
[15:12] <jam> mgz: I'm proposing one with the right associated bugs, but the code is the same
[15:12] <jam> then again, it is *one* line :)
[15:12] <mgz> jam: really does seem like we want a test
[15:13] <jam> mgz: how would you write such a test?
[15:13] <jam> mgz: we do still have 1 test that calls AllMachines
[15:13] <jam> and I did manual testing on HP and Canonistack
[15:14] <jam> I can do a whitebox test of what the regex is
[15:14] <jam> but that doesn't really help much
[15:14] <jam> you really need a test in an environment, because the thing interpreting the regex is HP/Canonistack/Havana/etc
[15:14] <jam> mgz: and it doesn't help that our "go test -live" is actually broken right now (something about not having AgentVersion in the config)
[15:15] <mgz> jam: can write a local live test, hook in some extra instances, and assert only sanely named ones are returned by Instances
[15:15] <mgz> doesn't need to be a live live test
[15:16] <mgz> ...but I do see why you want that
[15:16] <jam> mgz: so we can, though it sounds very close to testing that I implemented exactly that. if you have a good way to phrase it that isn't *too* tied to implementation, I do think it is a good "I didn't screw this up when I refactored code" check
[15:17] <mgz> manually nova boot another server with a name that would have matched under the old pattern, but won't under this?
[15:17] <mgz> that way we have a test that fails on the actual bug reported
[15:18] <jam> mgz: reasonable point, please leave it in the review so I remember to do it
[15:19] <mgz> jam: are you putting up a new -cr or should I use that one?
[15:25] <rogpeppe> dimitern, fwereade_: i've added my original thoughts for how we might do charm uploads to the PutCharm proposal.
[15:28] <rogpeppe> dimitern, fwereade_: it seems a bit simpler to me, but there's probably a good reason why it would not work
[15:28] <dimitern> rogpeppe, thanks, i'm reading it now
[15:29] <dimitern> rogpeppe, and commenting
[15:32] <abentley> jam: Thanks for the quick fix.  Looks like it will be really hard to cause a bogus match from here on out.
[15:35] <natefinch> niemeyer: got a second?  Hopefully quick question about replica sets
[15:36] <niemeyer> natefinch: Yo, yep
[15:38] <natefinch> niemeyer: I'm writing tests for my code that configures replica sets, so it brings up some mongo instances with mongod --replset foo etc, but I need a good way to test if they're fully up and ready before adding them to the replica set.  Suggestions?  Would Ping() work?
[15:43] <natefinch> niemeyer: my thought was to put a direct dial and then a ping into a loop, and wait til it succeeds or passes a deadline... but it seems never to succeed
[15:43] <mgz> can I have stamp on codereview.appspot.com/37210044 merge of 1.16 into trunk please?
[15:47] <abentley> jam: It appears that when you create an instance on openstack, you can attach metadata to it.  And when you list instances, you can retrieve that metadata.  This could be a sure-fire way of ensuring you delete only the instances you created, e.g. by storing the env name as metadata.
[15:48] <natefinch> niemeyer: nevermind, I think I just figured it out... forgot to use a monotonic session.
[15:55] <niemeyer> natefinch: Have you seen the LiveServers method?  That's one way too
[15:55] <niemeyer> natefinch: Oh, *before* adding to the RS.. okay, nevermind
[15:56] <natefinch> niemeyer: right, the problem is that I was trying to add them to the replicaset before they were fully up, so mongo would complain that it couldn't add a majority of the servers to the set
[15:56] <rogpeppe> fwereade_, jam, dimitern: any idea how the problem mentioned by iri- in #juju might be happening?
[15:56] <niemeyer> natefinch: Understood
[15:57] <rogpeppe> fwereade_, jam, dimitern: (iri- is peter waller, BTW, the person I've been trying to sort out the environment of)
[16:29] <mgz> rogpeppe: can I have a stamp on the trunk merge of 1.16 please? cr 37210044
[16:29] <rogpeppe> mgz: looking
[16:30] <rogpeppe> mgz: LGTM
[16:30] <mgz> ta!
[16:43] <bac> hi rogpeppe or fwereade_, when bootstrapping juju is there a way to get the same effect of 'default-series' in the environment file via the command-line?  --series does not seem to be it.
[16:44] <rogpeppe> bac:  i think it has to be in the environment config
[16:44] <bac> rogpeppe: that was my conclusion but i'd hoped i'd overlooked something.
[16:48] <fwereade_> bac, it's not ideal, but you can always change default-series with set-env after you've bootstrapped
[16:49] <bac> fwereade_: and that will affect your bootstrap node?
[16:49] <fwereade_> bac, no, it won't
[16:50] <bac> yeah, that would be scary.
[16:50] <fwereade_> bac, it'll affect what charm series is inferred when you deploy, and what series machines added without --series get, but that should be it
[16:50] <bac> ok, thanks
[16:50] <fwereade_> bac, cheers
[16:52] <jcastro> sinzui, I have a community update on juju to stream today, looking at the milestone for 1.16.5 it's 1 bug away, is it safe for me to say that we'll have .5 out before the holidays?
[16:52] <sinzui> NO way
[16:53] <sinzui> jcastro, I reported that bug weeks ago and no one is working on it. I think 1.16.5 can never happen because it breaks cli compatibility
[16:54] <jcastro> ok so I can just reiterate that .4 is where it's at?
[16:55] <sinzui> I think we should hope for 1.17 this month with 1.18 in January
[16:59] <sinzui> damn it. abentley there is an extra leading / in the azure release files that is not in the testing files. All the files were pushed to the wrong location
[16:59]  * sinzui blows a gasket.
[16:59] <abentley> Crap!
[17:11] <sinzui> abentley, I am pushing the files to the correct location. We can talk while everything goes up
[17:14] <TheMue> rogpeppe: you'll get a new push of my CL tomorrow, I'm now changing it to reading from the end first
[17:14] <rogpeppe> TheMue: cool
[17:14] <rogpeppe> TheMue: you got my email?
[17:15] <TheMue> rogpeppe: oh, eh, yes (right now, didn't look at mail before) :D
[17:16] <abentley> sinzui: relevant? http://162.213.35.54:8080/job/azure-upgrade-deploy/71/console
[17:16] <rogpeppe> TheMue: quite fun that i was still able to find the code that i remembered...
[17:16] <TheMue> rogpeppe: great, my approach looks simpler initially, but has less error control and is less generic
[17:17] <TheMue> rogpeppe: will combine both
[17:17] <TheMue> rogpeppe: but have to step out now, visitors
[17:17] <rogpeppe> TheMue: ok, see ya
[17:18] <sinzui> abentley, yes, in fact, it answers what is in my head.
[17:23] <jamespage> sinzui, all the juju-core 1.16.4 packages are built in the juju-packagers PPA btw
[17:24] <sinzui> jamespage, I saw thanks
[18:44]  * rogpeppe is done for the day
[18:44] <rogpeppe> g'night all
[19:44] <thumper> mramm: you around for a quick hangout?
[19:44] <mramm> yea
[19:45] <mramm> can I have 5 min
[19:54] <thumper> sure
[19:54]  * thumper didn't notice the response
[19:54] <thumper> mramm: if you need a few more minutes, I'll go make a coffee
[19:54] <mramm> yea
[19:55] <mramm> need a few more
[19:55] <thumper> ok
[19:55]  * thumper goes to make coffee
[19:58] <natefinch> plugged in a new USB gigabit ethernet adapter today, and laptop froze up twice today, which it's never done before.  Coincidence?
[20:00] <thumper> natefinch: probably not :-)
[20:01] <natefinch> thumper: I didn't really think so :)   Dang... 'cause it's really much more pleasant on ethernet than wifi where my desk is (there's like 3 walls between me and the router, and my signal blows)
[20:02] <thumper> natefinch: we have brick or plaster interior walls, and that kills the signal.
[20:02] <thumper> ran a cat 6 cable from the office to the dining room
[20:02] <thumper> and have a second access point there
[20:03] <thumper> mramm: for when you are ready https://plus.google.com/hangouts/_/72cpim91vapctd3ad0g1v02aa0?hl=en
[20:03] <natefinch> I actually am planning to run a cable from the basement under my office... there's a phone jack in this room that is sort of hilariously useless, which I plan to replace with an ethernet jack.  Putting an access point in here is a good idea, though.  hadn't thought of that
[20:44] <natefinch> You know one of the things I like best about Canonical?  Sometimes stuff actually gets addressed when I complain.
[21:13] <natefinch> niemeyer: one (hopefully last) mongo question if you have second?
[21:37] <jcastro> heya thumper
[21:37] <jcastro> what's the TLDR on manual provisioning lately?
[21:37] <thumper> jcastro: mostly working
[21:38] <thumper> jcastro: hazmat is using it quite a lot I think
[21:38] <thumper> jcastro: axw is working on making it destroy itself properly
[21:39] <thumper> which is involving moving destroy environment into the API
[21:39] <thumper> which is mostly done I think
[21:39] <thumper> apart from that, I think it is working
[21:39] <thumper> but not really documented
[21:42] <thumper> o/ waigani
[21:45] <waigani> thumper: hello :)
[21:53] <hazinhell> jcastro, it's awesome.. after i get out of hell, i've got a cool plugin that you'll like
[21:53] <hazinhell> is there still on-going work on the api?
[21:54] <hazinhell> deploy is listed as in progress but dimitern is on holiday for a while
[21:54] <hazinhell> mostly just looking for putcharm in the api
[21:55] <hazinhell> jcastro, manual still has some rough edges, there's the manual-provider tag on the bugs
[21:55] <hazinhell> but it works pretty well outside of the rough edges imo.
[21:55] <jcastro> thumper, it's partially documented: https://juju.ubuntu.com/docs/config-manual.html
[21:56] <jcastro> hazinhell, that's good to hear!
[23:09] <hatch> hey does 1.16.4 have the fix for manual provider?