[11:05] <frankban> rick_h_: I have some git questions, please ping me when you are available
[12:56] <gary_poster> frankban and jujugui generally: I am sorry for the short notice but I have to have another call this morning.  I will ping you to reschedule asap, as necessary.  hopefully it won't be too big of a disruption
[12:57] <hazmat> frankban, fwiw feedback branch merged
[12:57] <frankban> hazmat: great! thanks
[12:57] <frankban> gary_poster: np
[12:58] <frankban> gary_poster: re the quickstart call, I suppose the timeframe we are interested in is from now to the trusty release, right?
[12:59] <gary_poster> frankban: right
[12:59] <frankban> gary_poster: cool thanks
[12:59] <gary_poster> np thank you
[13:01] <rick_h_> gary_poster: rgr
[13:01] <frankban> hazmat: I will work on a follow up branch so that the guiserver can use the new feedback stuff, sounds good?
[13:02] <hazmat> frankban, sounds good, i'll hold off on a new release then
[13:02] <frankban> hazmat: cool
[13:03] <frankban> rick_h_: following the git workflow described in the GUI docs, I get this: http://pastebin.ubuntu.com/6802788/
[13:03] <rick_h_> frankban: what is the current branch you're on?
[13:03] <rick_h_> git branch -a
[13:03] <frankban> dying-service
[13:03] <frankban> rick_h_: ^^^
[13:04] <rick_h_> frankban: what version of git? 
[13:04] <frankban> rick_h_:  1.8.3.2
[13:05] <rick_h_> frankban: so like it's saying the branch isn't tracked yet. So you've not pushed it up yet?
[13:05] <rick_h_> frankban: e.g. git push origin dying-service
[13:05] <rick_h_> oh hmm, I do see the branch
[13:05] <frankban> rick_h_: this happens before and after the branch push to origin.
[13:06] <rick_h_> so it did not track it. Ah, there's a snippet in the demo .gitconfig I think I must have for this
[13:06] <rick_h_> frankban: ok, so even though you pushed it's not tracking. You can fix that with the command from the help in the error
[13:06] <rick_h_> git branch --set-upstream-to=origin/dying-service dying-service
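For reference, the tracking fix rick_h_ suggests can be sandboxed in a throwaway repo pair. The paths and the dying-service branch name below are illustrative, and the example assumes user.name/user.email are configured:

```shell
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"            # stand-in for the GitHub remote
git clone -q "$tmp/origin.git" "$tmp/work" && cd "$tmp/work"
git config user.email you@example.com && git config user.name You
git commit -q --allow-empty -m "initial"
git checkout -q -b dying-service

git push -q origin dying-service                # a plain push may not set up tracking

# Link the local branch to its remote counterpart explicitly:
git branch --set-upstream-to=origin/dying-service dying-service

# The branch now knows its upstream, so rebase/pull/push can infer it:
git rev-parse --abbrev-ref 'dying-service@{upstream}'    # origin/dying-service
```

With the upstream set, a later bare `git push` (under push.default = tracking/upstream) targets origin/dying-service automatically.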
[13:06] <frankban> in gitconfig I have [push]
[13:06] <frankban>     default = tracking
[13:07] <rick_h_> oh, you do have that. Was just getting ready to paste that
[13:08] <rick_h_> well, let's see if setting tracking helps first and dig into why it's not tracking as a follow up
[13:08] <rick_h_> if you --set-upstream does the rebase command work?
[13:08] <frankban> rick_h_: trying
[13:10] <frankban> rick_h_: now I have nano opened, do I need to write the new commit message?
[13:10] <rick_h_> frankban: so it should bring up an editor for you to choose to keep/squash commits 
[13:10] <rick_h_> frankban: after you choose which to keep/squash you then get to adjust your commit message
[13:11] <rick_h_> frankban: see http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html
[13:12] <frankban> rick_h_: I see this in the editor: http://pastebin.ubuntu.com/6802845/
[13:13] <rick_h_> umm, ok...
[13:13] <rick_h_> try this please
[13:13] <rick_h_> git rebase -i HEAD~~~~
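An aside on the syntax here: each `~` steps one commit back, so `HEAD~~~~` is just another spelling of `HEAD~4`. A quick check in a throwaway repo (identity config lines included so the demo commits work anywhere):

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name You
for i in 1 2 3 4 5; do git commit -q --allow-empty -m "commit $i"; done

git rev-parse HEAD~~~~ HEAD~4     # both lines print the same commit hash
```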
[13:13] <frankban> rick_h_: so to abort that I have to delete all lines, right?
[13:14] <rick_h_> oh hmm, there's not multiple commits
[13:14] <frankban> rick_h_: I did multiple commits
[13:14] <rick_h_> frankban: no, just quit that editor without making any changes
[13:14] <rick_h_> frankban: right, but the line "Rebase 1de177e..1de177e onto 1de177e"
[13:14] <rick_h_> means that the rebase it was trying to do was all on the same commit, nothing to do
[13:14] <frankban> rick_h_: yeah
[13:14] <frankban> rick_h_: trying "git rebase -i HEAD~~~~"
[13:15] <rick_h_> does that look better having the last 4 commits in the branch history?
[13:16] <frankban> rick_h_: http://pastebin.ubuntu.com/6802858/
[13:16] <frankban> rick_h_: the last 3 commits are the relevant ones for my branch
[13:16] <rick_h_> frankban: ok, and the last 3 are yours?
[13:16] <frankban> rick_h_: yes
[13:17] <rick_h_> frankban: ok, so you can rebase them here and force push
[13:17] <rick_h_> frankban: the --autosquash didn't work because you've already pushed all three commits to your remote (origin dying-service)
[13:18] <rick_h_> frankban: --autosquash tries to help you squash new commits locally that you've done since your last push to help limit your rebase activity to only things you've done.
[13:18] <rick_h_> so that explains the "Rebase 1de177e..1de177e onto 1de177e"
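For what it's worth, in git itself `--autosquash` keys off commit messages rather than push state: it pre-arranges commits created with `git commit --fixup` (message prefix `fixup!`) in the rebase todo list. A minimal sketch in a throwaway repo, with illustrative file and message names:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name You

echo base > f && git add f && git commit -q -m "base"
echo feature > f && git commit -q -am "add feature"
echo fix > f && git commit -q -a --fixup=HEAD    # message: "fixup! add feature"

# Accept the generated todo list unchanged; the fixup! commit is folded
# into "add feature" without opening an editor:
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash HEAD~2
git log --format=%s    # prints "add feature" then "base"
```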
[13:18] <rick_h_> and we're still left with the mystery of why it didn't track when you pushed originally
[13:19] <frankban> rick_h_: how do I have to change that document?
[13:19] <rick_h_> frankban: k, sec
[13:19]  * rick_h_ hates how our pastebin doesn't have a reply/edit
[13:20] <rick_h_> frankban: so the notes at the bottom help you figure out what to do. You want to squash the last two commits into the first so it should look like http://paste.ubuntu.com/
[13:20] <rick_h_> frankban: when you save that document, it should bring up a new editor window with a diff up top and your changelog history for those 3 commits below
[13:20] <rick_h_> frankban: and you can edit the commit message to be the complete clean history of what this branch is/does
[13:20] <frankban> rick_h_: the link you sent is the pastebin home page
[13:21] <rick_h_> frankban: sorry, http://paste.ubuntu.com/6802873/
[13:21]  * rick_h_ forgot to put in the username to paste
[13:21] <frankban> rick_h_: so you pick everything including your first commit, and squash your other commits, right?
[13:22] <rick_h_> frankban: correct, it works backwards
[13:22] <rick_h_> so you're "squashing" the extra commits down into the first and left with one clean one
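The pick/squash edit described here can also be scripted, which makes the shape of the todo list concrete. Below, the last three commits of a throwaway repo are squashed into one; GNU sed is assumed, `GIT_EDITOR=true` accepts the combined commit message as-is, and all names are illustrative:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name You
for m in base one two three; do echo "$m" > f && git add f && git commit -q -m "$m"; done

# Rewrite the todo list so every line after the first reads "squash"
# instead of "pick", exactly as you would edit it by hand:
GIT_SEQUENCE_EDITOR='sed -i "2,\$s/^pick/squash/"' GIT_EDITOR=true git rebase -i HEAD~3

git rev-list --count HEAD    # 2: "base" plus the one squashed commit
```

After this, a `git push -f origin <branch>` (as in the chat) overwrites the remote history with the single clean commit.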
[13:23] <frankban> rick_h_: so, we do this so that the juju-gui develop branch only includes relevant changes, one commit for each branch, right?
[13:23] <rick_h_> frankban: yes, or more if it makes sense
[13:24] <rick_h_> but this is the git way of replacing the way bzr would layer its commits on merge to different branches (trunk)
[13:24] <rick_h_> frankban: and it's why we work off in our own forks until review is done. 
[13:25] <rick_h_> frankban: so if you look at https://github.com/juju/juju-gui/commits/develop you can see I kept two commits for my one branch that landed, specifically noting the IE fix required to land.
[13:26] <rick_h_> frankban: once you're ok on that and have pushed it up with -f to force overwrite your remote history, please run git config --list and let me know if push.default was picked up right
[13:27] <frankban> rick_h_: cool, ok so now doing "git push -f origin dying-service"
[13:27] <rick_h_> frankban: correct
[13:28] <rick_h_> and when you go to https://github.com/frankban/juju-gui/commits/dying-service
[13:28] <rick_h_> it's cleaned and one commit in the history now
[13:28] <frankban> rick_h_: yes
[13:28] <rick_h_> which you can use for the pull request to the juju version
[13:29] <frankban> rick_h_: git config --list -> http://paste.ubuntu.com/6802927/
[13:29] <gary_poster> frankban: not sure yet when we should reschedule, but we'll do it.  rick_h_ can call when you are ready
[13:30] <frankban> gary_poster: ack
[13:30] <rick_h_> gary_poster: rgr, sec let me get it fired up
[13:30] <rick_h_> frankban: afk for a bit
[13:33] <bac> hazmat: i made the changes you requested in https://code.launchpad.net/~bac/juju-deployer/parse-constraints/+merge/202674.  please merge this branch if you approve.  thanks.
[13:46] <hatch> I've been reading about a lot of people's pipes freezing because of this new weather.... I don't get it, how do the pipes freeze? Aren't they IN the house?
[13:47] <hatch> or do building practices over there have the pipes in the outside walls or something?
[13:54] <gary_poster> hatch, basements aren't insulated terribly well, and also the pipes coming into the house are often the problem.  When you don't have to worry about really cold weather very often, you don't have a good demonstration that everything is working properly very often
[13:54] <gary_poster> benji no rush but https://plus.google.com/hangouts/_/calendar/Z2FyeS5wb3N0ZXJAY2Fub25pY2FsLmNvbQ.8vsks3qndr814s6te61qlkqn8g when you wanna
[13:55] <hatch> ohh gotcha, yeah our pipes come into the house 8ft underground and then everything runs on the inside walls, that must be the difference
[13:55] <rick_h_> frankban: off call. did you get the pull request ok or hit any other snags?
[14:00] <bac> gary_poster: what time is our call today?  the email has something different from google cal.
[14:00] <gary_poster> bac 2pm my time.  ok?
[14:00] <bac> gary_poster: nm, subsequent email is correct
[14:00] <gary_poster> sorry
[14:00] <bac> yes, 2 us/east
[14:02] <hatch> at least I'm not the only one getting emails from google rescheduling meetings lol
[14:06] <frankban> rick_h_: on call, will ping later
[14:08] <hatch> evilnickveitch it would be nice if the headers in the juju docs were hrefs so that we could link to their # without having to do it manually like https://juju.ubuntu.com/docs/charms-deploying.html#local
[14:08] <hatch> s/were/had
[14:13] <bac> gary_poster: i'd like to do the charm testing stuff today and probably tomorrow.  that alright?
[14:13] <gary_poster> bac sure
[14:13] <rick_h_> oh right, /me moves my card back and goes to download amulet
[14:13] <bac> rick_h_: it's on lp but authoritative on github
[14:14] <rick_h_> bac: yay
[14:23] <bac> rick_h_: so do i really have to write charm tests until 9pm, per the invitation?
[14:23] <rick_h_> bac: I plan on cheating and writing a *still hacking...hacking..* bot to make it 
[14:24] <bac> hey, we should collaborate on that.  clojure?
[14:24] <rick_h_> hah, been thinking of trying that
[14:36] <evilnickveitch> hatch, sure, not all of the headers have ids, i have been adding them when i refresh pages.
[14:36] <evilnickveitch> e.g. authors-hook-debug.html
[14:37] <frankban> rick_h_: is there a way to make git remember the remote branch when pushing?
[14:38] <rick_h_> frankban: my understanding is that's what the config is supposed to do
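To answer the question directly, there are two standard ways to make git remember the remote branch (branch name illustrative):

```shell
# 1. Set the upstream per-branch at first push with -u/--set-upstream;
#    afterwards a bare "git push" suffices on that branch:
git push -u origin dying-service

# 2. Or configure push to target the tracked upstream branch globally
#    ("tracking", as in frankban's .gitconfig, is the older alias for "upstream"):
git config --global push.default upstream
```

Note that push.default only controls where `git push` sends things; it does not itself create the upstream link, which is why the branch in the chat still needed `--set-upstream-to` despite the config being present.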
[14:38] <rick_h_> frankban: I wanted to walk through a test branch and see if we could dupe or see something slightly off in the last branch
[14:41]  * gary_poster on call now, and out 9:50-10:40 FWIW (that is, I'll be out in 10 minutes, and back about 50 minutes later)
[14:47] <hatch> evilnickveitch cool thanks, I just noticed it today as a nice-to-have :)
[14:49] <frankban> guihelp: could anyone please review and QA https://github.com/juju/juju-gui/pull/85 ? thanks!
[14:50] <gary_poster> running.  biab
[14:52] <hatch> frankban sure, it'll take me a bit, doing a couple things at once here
[14:52] <frankban> hatch: np and thank you
[15:01] <hatch> frankban do you know if in 1.17.1 they changed the output of juju status in local envs? This is what I get after bootstrapping and deploying the gui https://www.evernote.com/shard/s219/sh/e97b60b5-20e8-4d3e-bded-3c0f62efa179/23d15a7ed70c5bdade557146fc08e377
[15:01] <hatch> 1.17.1 being trunk 
[15:02] <frankban> hatch: that does not look right
[15:02] <hatch> haha nope
[15:02] <hatch> maybe I'll ask in juju-dev
[15:02] <frankban> hatch: no idea, worth pinging them
[15:18] <frankban> hatch: didn't we decide to still use isTrue and monkey patch assert?
[15:38] <hatch> frankban I thought that was the last-resort method so that we didn't have to fix all of the old ones
[15:39] <hatch> so any new ones we could do the 'old fashioned way'
[15:41] <frankban> hatch: :-/ 
[15:41] <hatch> hey! I'm just trying to save us headaches, blame Chai :P
[15:42] <frankban> boo chai!
[15:42] <hatch> there we go! and I agree
[15:42] <hatch> :)
[15:43] <frankban> heh
[15:44] <hatch> I have a huge craving for a SODA
[14:45] <hatch> i'd say a POP but no one would know what I was talking about :P
[14:46] <hatch> Makyo so I was doing some research last night on the vagrant stuff and it looks like the typical file sharing system is the virtual box file sharing not NFS
[15:52] <gary_poster> jujugui call in 7 btw
[15:53] <hatch> frankban I'm just spinning up a juju env to test your branch right now, shouldn't be much longer
[15:54] <frankban> hatch: cool thanks
[15:58] <gary_poster> jujugui call in 2
[16:19] <bac> rick_h_: will webops grab log files via an IRC request or do they require an RT?
[16:19] <bac> hatch: POP is a mail protocol
[16:19] <rick_h_> bac: I've had good luck just getting vanguard
[16:19] <hatch> haha
[16:19] <rick_h_> bac: only once I think I got asked to file an RT
[16:19] <bac> rick_h_: thx
[16:21] <frankban> gary_poster: I am available for 1on1 anytime, also tomorrow if that works better
[16:22] <gary_poster> thanks frankban.  I moved it to tomorrow at today's time. You can change the time in the calendar if that is bad for you 
[16:23] <hatch> gary_poster we have a regression with the inspector and destroying services https://bugs.launchpad.net/juju-gui/+bug/1271986
[16:23] <frankban> gary_poster: tomorrow same time works for me
[16:23] <hatch> frankban qa done
[16:24] <frankban> hatch: great thanks
[16:24] <gary_poster> thanks frankban, great
[16:24] <gary_poster> hatch, looking
[16:25] <hatch> and in other news - people are asking for putcharm :) http://askubuntu.com/questions/409637/deploy-local-charm-from-juju-gui/
[16:25] <gary_poster> hatch :-/ do you know if it is in release?  my guess is yes
[16:26] <hatch> gary_poster it's in frankban's branch so most likely
[16:26] <gary_poster> yeah
[16:26] <gary_poster> not horrible, but me no likey regressions.  putting on high list in kanban
[16:27] <frankban> hatch: did not encounter that while QAing my branch :-/
[16:27] <gary_poster> pre-existing IIUC
[16:27] <hatch> frankban it might only happen if the service is pending when you destroy it
[16:28] <hatch> not sure
[16:28] <frankban> hatch: never waited for the service to be ready...
[16:28] <hatch> hmm
[16:28] <hatch> well then....it was your branch heh
[16:28] <frankban> hatch: trying it again
[16:28] <hatch> the bug wasn't caused by you though
[16:28] <frankban> hatch: maybe it's intermittent
[16:29] <hatch> oh boy, intermittent failures when the 'test' cycle is 15mins lol
[16:29] <frankban> this is nothing
[16:40] <rogpeppe> gary_poster: just checking: if you were implementing the debug-log stream in a REST interface (not RPC-oriented like the API) what would be the most appropriate way of transferring the data? would a long-lived GET request work OK? perhaps a unidirectional websocket connection would be better?
[16:41] <rogpeppe> gary_poster: i am currently pushing back on the currently mooted design, which uses the normal API for streaming the debug-log data, because it's not a very efficient way of streaming large quantities of data
[16:41] <hatch> rogpeppe you mean via the RPC?
[16:42] <rogpeppe> hatch: yeah - that's the current design, which i don't think is quite right
[16:42] <hatch> I disagree, websockets were designed for exactly this
[16:42] <rogpeppe> hatch: yes, websockets, but not RPC-over-websockets
[16:43] <rogpeppe> hatch: so you'd suggest a unidirectional websocket rather than a long-lived streaming GET request?
[16:43] <hatch> rogpeppe that would be my preference 
[16:43] <rogpeppe> hatch: ok, cool.
[16:43] <hatch> I haven't been following the api discussion though
[16:43] <gary_poster> rogpeppe: interesting question.  long-lived GET feels reasonable at first cut.  I'd be interested in hearing what the strategy was for both sides protecting themselves from too much content/insufficient read speed.  websocket might give more options there
[16:44] <rogpeppe> gary_poster: well, TCP flow control should do the job sufficiently
[16:44] <gary_poster> although perhaps "drop the connection when problems happen" is the right strategy
[16:44] <hatch> long polling is so....1995 :P
[16:44] <rick_h_> gary_poster: rogpeppe I'd want to peek at how/if we hit multiple-websocket limitations and such.
[16:44] <frankban> hatch: FWIW I am not able to reproduce the inpector bug (using my branch). what charm did you use?
[16:44] <rogpeppe> hatch: the advantage of long polling is that there's zero protocol overhead above TCP
[16:44] <rick_h_> it's interesting if we can run a couple at a time as we're monitoring a couple of units at a time, or getting something that allows multiple watches over the single websocket
[16:45] <hatch> but really though streaming data like debug-log is what a websocket was pretty much designed for - it's similar to games sending data to and from the server
[16:45] <rick_h_> hatch: except that's usually bi-directional
[16:45] <rogpeppe> rick_h_: you should be able to run any number; but you'd use a connection for each stream.
[16:45] <hatch> sure, which our current websocket also is
[16:45] <rick_h_> which isn't needed (I don't think)
[16:46] <hatch> so you guys would like to open up an additional websocket for each debug-log stream?
[16:46] <gary_poster> rogpeppe: had to refresh my flow control knowledge.  
[16:46] <rogpeppe> hatch: that's my suggestion, yes
[16:46] <hatch> sorry maybe I should just go read the thread
[16:46] <gary_poster> this is the thread I think? :-)
[16:46] <hatch> rogpeppe hmm
[16:46] <rogpeppe> gary_poster: +1
[16:46]  * hatch is processing
[16:47] <gary_poster> rogpeppe: I don't see an issue with the GET for this use case atm
[16:47] <gary_poster> websockets seem unnecessary
[16:47] <rick_h_> gary_poster: yea, my only thing there was we hit long-poll limitations of 6? connections per browser in LP?
[16:48] <rick_h_> I'm trying to recall when it hit a limit but know some people hit it
[16:48] <rick_h_> was per domain based if I recall
[16:48] <gary_poster> true, though we already were thinking that we needed a relatively low limit like that
[16:48] <gary_poster> yes
[16:48] <hatch> ok Im ready
[16:48] <rick_h_> right, but two browsers doubles that and hits limit faster
[16:48] <rick_h_> two tabs
[16:48] <rick_h_> etc
[16:48] <rick_h_> not sure why you'd have it but want to think on it
[16:49] <hatch> I'm throwing my hat in with websockets because of the limitations rick_h_  mentioned (although I think that's going to be more work for us)
[16:49] <rogpeppe> i'm thinking that if you want to monitor many units/machines, it would be better to have a single stream multiplexing all of them
[16:49] <gary_poster> yup
[16:49] <gary_poster> that seems like a safer choice
[16:49] <hatch> if we do that, then we can use the current websocket
[16:49] <rick_h_> rogpeppe: yea, we were going to look at what that number is. 
[16:49] <rogpeppe> hatch: not really
[16:49] <hatch> rick_h_ it's different for every browser
[16:49] <rogpeppe> hatch: the current websocket is strictly RPC-only
[16:49] <rick_h_> I'm pro second websocket connect from the top of my head
[16:50] <hatch> rogpeppe oh ok, I'm not familiar with how it's setup on the core side
[16:50] <hatch> so second websocket which multiplexes the data?
[16:50] <rick_h_> hatch: right, that allows requesting watches for units 1-5
[16:50] <gary_poster> how would you add a new stream?
[16:50] <gary_poster> in the multiplexed story
[16:50] <hatch> gary_poster just throwing this out there....that ws could also accept requests
[16:50] <rogpeppe> gary_poster: you'd ask for a new stream with the new set of units/machines
[16:50] <hatch> or the rpc one could handle that
[16:51]  * hatch wishes he knew more about how core worked
[16:51] <rogpeppe> gary_poster: we'd have to be careful about how to deal with the potential overlap
[16:51] <gary_poster> Right, the RPC channel affecting the secondary channel on the fly seems a bit odd
[16:51] <rogpeppe> gary_poster: i don't think that's a good idea
[16:51] <hatch> so how would one open the secondary connection? It would have to be through the rpc no?
[16:51] <rogpeppe> gary_poster: because you might actually be talking to two different API servers
[16:52] <rogpeppe> hatch: the same way charm upload is done now
[16:52] <gary_poster> feels more natural to let the debug log websocket be bidirectional then
[16:52] <gary_poster> is that what you are describing rogpeppe?
[16:52] <hatch> rogpeppe http?
[16:52] <hatch> so http get to open a connection then bidirectional websocket after?
[16:52] <rogpeppe> gary_poster: actually, a bi-directional (but not RPC-oriented) websocket connection could work well
[16:52] <hatch> that feels weird
[16:52] <rick_h_> rogpeppe: +1
[16:52] <rick_h_> log socket
[16:53] <gary_poster> how so hatch?  you open a debug log channel, you tell the debug log channel what you want to hear
[16:53] <rogpeppe> hatch: you'd open the websocket connection in exactly the same way you'd open the current API RPC connection (except with a different URL path)
[16:53] <hatch> right sorry my mind was being stupid
[16:53] <hatch> ignore what I just said
[16:53] <hatch> I like this idea
[16:53] <hatch> :)
[16:53] <gary_poster> :-)
[16:54] <gary_poster> rogpeppe: so this is trashing existing work, yeah? :-/
[16:54] <hatch> ok +1 to new non-rpc based bidi websocket 
[16:54] <rogpeppe> gary_poster: i'm afraid so
[16:54] <gary_poster> rogpeppe: is the win really worth it?
[16:54] <hatch> oh poo
[16:54] <rogpeppe> gary_poster: but i really feel that using an RPC-based API is not good
[16:54] <hatch> I didn't think this was already implemented
[16:54] <gary_poster> almost landed, almost approved by william
[16:54] <rogpeppe> gary_poster: you're usually dealing with a very high latency connection
[16:55] <rogpeppe> gary_poster: and we'll really feel the slowness if every read of a set of debug messages involves a round trip
[16:56] <rogpeppe> i haven't looked at the implementation yet. i'll just do so. the adaptation might be easy, in fact
[16:57] <gary_poster> rogpeppe: on the face of it, it feels like getting current solution landed and planning other solution later would be reasonable.  If this can happen quickly (without slowing things down) then I'm +1; otherwise, if this pushes things out, I'm pretty concerned
[16:59] <hatch> yeah we would like the feature sooner rather than later
[17:13] <hatch> *sigh* I swear something is trying to stop me from finishing this branch
[17:28] <gary_poster> hey jujugui.  Is everyone available for a quick call?  Or alternatively, who is available for a quick call?
[17:28] <hatch> i am
[17:29] <hatch> I need a break from trying to build juju-core anyways :)
[17:29] <frankban> I am
[17:29] <benji> heh
[17:29] <bac> sure
[17:29] <benji> yep
[17:30] <gary_poster> ok cool
[17:30] <gary_poster> jujugui, https://plus.google.com/hangouts/_/canonical.com/team-call (Makyo and rick_h_ will fill in later--ping me or come on by)
[17:33] <hazmat> rogpeppe, whats wrong the log push down the existing websocket connection.. its just multiplexing the connection
[17:33] <hazmat> argh... what's wrong with
[17:33] <rogpeppe> hazmat: because if it fills up, then it DOS's the rest of the connection
[17:34] <hazmat> rogpeppe, meaning the client wasn't consuming fast enough... which could happen either way.. failure recovery is effectively the same though.. reconnect
[17:35] <rogpeppe> hazmat: no, if the client isn't consuming fast enough and there's a separate connection, there's no problem. TCP flow control will work fine there.
[17:35] <rogpeppe> hazmat: we're talking about a potentially large volume of data here
[17:35] <rogpeppe> hazmat: that will probably be very bursty too
[17:37] <hazmat> rogpeppe, this feels a bit theoretical. the gui isn't going to be subscribing to the entire env all the time, but doing something more targeted to a context; it will close flows when the focus changes.
[17:37] <rogpeppe> hazmat: we're going to be using this interface for the command line too
[17:38] <rogpeppe> hazmat: and individual units/machines can produce large quantities of data too
[17:38] <rogpeppe> hazmat: plus this way is *much* simpler
[17:38]  * Makyo starts sending emails behind gary_poster's back :T
[17:39] <gary_poster> Makyo: heh
[17:42] <benji> I couldn't resist: http://tinyurl.com/g-day-countdown
[17:42] <hatch> lol!
[17:42] <hazmat> rogpeppe, individual hooks spewing large amounts isn't very normal.. disks are finite...  while having a separate stream is nicer for flow, it's going to be some rewrite/lost time. not really sure how it's simpler, isn't the traffic effectively the same?
[17:43] <hazmat> rogpeppe, also just curious, is this going to include filtering and subscribing to individual contexts of interest (unit/machine), or the current read from the firehose with replay approach?
[17:43] <rogpeppe> hazmat: if you're talking about using the existing websocket connection to stream data unidirectionally, that's a significant architecture change and would require quite a bit of attention
[17:43] <rogpeppe> hazmat: the former
[17:44] <rogpeppe> hazmat: (i've almost done it)
[17:45] <hazmat> rogpeppe, ah.. simpler because it doesn't need to work around the existing json rpc server, ic..
[17:45] <rogpeppe> hazmat: yeah.
[17:45] <rogpeppe> hazmat: i like having one protocol per connection
[17:46] <bac> rick_h_: we're seeing this on m.j.c.  -- does prod have a different port? ConnectionError: HTTPConnectionPool(host='localhost', port=2464): Max retries exceeded with url: /api/3/bundle/proof (Caused by <class 'socket.error'>: [Errno 111] Connection refused)
[17:50] <hatch> frankban still here?
[17:51] <frankban> hatch: almost, maybe, go ahead
[17:52] <hatch> :) frankban I'm wondering if you have ever run into an issue with go where it cannot find some dependencies? cannot find package "code.google.com/p/go.crypto/ssh" for example
[17:52] <hatch> this error comes when I run `go install` and i have run `go get` already
[17:52] <frankban> hatch: weird...
[17:53] <frankban> go get -v?
[17:53] <hatch> yeah, I didn't get this issue in my vm's but on metal it is throwing this
[17:53] <hatch> frankban I'm running it again
[17:53] <hatch> maybe it dropped some before?
[17:54] <frankban> hatch: it's possible... check inside $GOPATH/src/
[17:55] <rick_h_> bac: I'm not sure. It'll be in the production.ini
[17:55] <rick_h_> bac: have them get that file and check the port in the config for the app running
[17:55] <frankban> hatch: it should contain a directory named code.google.com
[17:55] <rick_h_> bac: but it sounds like it can't hit itself locally?
[17:55] <hatch> frankban ok I'll figure this out, it's past your EOD so you can take off :)
[17:55] <rick_h_> bac: for proof calls?
[18:05] <bac> rick_h_: yeah, proof calls.  recall this was part of the charmproof subclassing i did to allow us to specify the server?  but it uses the config value in the ini file on both sides of the connection.  same variable.
[18:07] <rick_h_> bac: is the variable added into the base that the production.ini is generated from?
[18:10] <rick_h_> bac: I'm trying to look and see. the ini file in production is generated in the charm hooks to be a combination of a base file + overrides
[18:10] <bac> rick_h_: i don't understand the question
[18:10] <rick_h_> bac: and I forget what that generation starts with
[18:10] <bac> rick_h_: oh, that
[18:10] <rick_h_> bac: so if a new variable is added to the ini file, it must be added to whatever that hook uses as a base and changed by the overrides in production
[18:10] <bac> rick_h_: i'm looking to see if that port is exposed to the charm. and if so will have them set it.
[18:11] <bac> rick_h_: but it was not new
[18:11] <rick_h_> bac: it shouldn't have to be exposed if it's accessed via localhost?
[18:11] <rick_h_> at least I didn't think so
[18:11] <rogpeppe> hazmat: here's the code (well, i haven't actually *run* it) for the log streaming. the deleted code is the currently proposed implementation. https://codereview.appspot.com/56100043/
[18:12] <bac> rick_h_: not exposed as in "opened firewall".    i mean exposed as part of the charm's config.yaml for juju get and juju set.  sorry for bad word choice.
[18:12] <rick_h_> bac: ah ok 
[18:15] <hatch> frankban figured out the issue with the juju-core deps
[18:15] <hatch> oh he's gone :)
[18:21] <bac> rick_h_: i'm asking for the production.ini file
[18:21] <bac> on staging proof.port=6543.  on production it is trying to use 2464.
[18:23] <rick_h_> right but what port is the service running on on prodstack? Do you have the production.ini?
[18:23] <rick_h_> [server:main]
[18:23] <rick_h_> use = egg:Paste#http
[18:23] <rick_h_> host = 0.0.0.0
[18:23] <rick_h_> port = 2464
[18:23] <rick_h_> is the default.ini
[18:23] <rick_h_> staging might change that, wonder if prodstack is on the default port
[18:24] <hatch> gary_poster looks like my issues with juju-core were related to the vm, it appears to be working as expected on metal....
[18:33] <bac> rick_h_: yeah, 'port' and 'proof.port' are different in the production.ini file.  hmm, seems like a DRY violation biting us in the ass
[18:33] <rick_h_> bac: rgr
[18:33] <rick_h_> bac: so have to trace the code in the charm that's generating the file and see where the mixup is and correct
[18:34] <rick_h_> bac: but a cowboy patch of fixing the ini file on prodstack should get us unstuck
[18:34] <bac> rick_h_: i *think*  I.S. maintains a custom version of the charm with custom production_overrides.ini
[18:35] <rick_h_> bac: ah, is that part of the 'IS black box' bit we don't see?
[18:44] <rick_h_> gary_poster: when you get a chance can we catch up on the debug log discussion? it was simu-cast and want to make sure I'm caught up
[18:44] <gary_poster> heh sure
[18:44] <gary_poster> will ping soon
[18:44] <rick_h_> rgr
[18:50] <BradCrittenden> thanks rick_h_
[18:51] <rick_h_> bac: np, sucky to hit a bug in that. It's not had an issue so far and I don't see anything freaky in looking at the config files
[19:01] <gary_poster> bac, no rush, but I'm ready in https://plus.google.com/hangouts/_/calendar/Z2FyeS5wb3N0ZXJAY2Fub25pY2FsLmNvbQ.b6tepoq090fnj4qfdg23a3okhg when you are
[19:01] <bac> rick_h_: dang -- http://manage.jujucharms.com/heartbeat
[19:02] <rick_h_> bac :(
[19:12] <marcoceppi> rick_h_: if it makes things easier, I can take your current pull req for amulet, merge it then merge mine and do a quick conflict resolution
[19:12] <marcoceppi> if that frees you up
[19:13] <rick_h_> marcoceppi: sec, I've got to finish fixing my sysdeps and add the test deps bits
[19:13] <marcoceppi> k
[19:13] <rick_h_> marcoceppi: I'll have an update and rebase it up clean in a few min
[19:13] <marcoceppi> <3
[19:18] <hatch> gary_poster There appears to be some sort of bug in juju-core which is returning a 405 when I attempt to post a charm. If no one gets back to me in juju-dev I'll have to wait until Dimiter gets in in the morning
[19:18] <gary_poster> hatch ack
[19:20] <hatch> I've also added a card to the needs spec column for fakebackend support. I think we will require the js zip lib which is required for DD'ing of a folder but I figure we should have a call as to whether we even want fakebackend support
[19:22] <gary_poster> hatch, it's not a matter of if we want it, IMO: it's a matter of cost/value.  If we can get it done in a few days it is a no-brainer IMO.  This is the kind of thing we want to be able to demo.
[19:22] <gary_poster> if it takes more than a few days, harder call
[19:23] <hatch> yeah sorry that's what I meant. I don't have any experience with the zip libs in general so hard to say without more research
[19:26] <bac> rick_h_: sorry dropped out and had a call with gary_poster.  did you dig any further?  i just asked for the new app.log.
[19:27] <rick_h_> bac: no, was waiting to hear what you found out sorry
[19:27] <bac> rick_h_: i'll handle this from here, though.
[19:27] <rick_h_> bac: let me know if I can help
[19:27] <bac> ugh, deej disappeared.
[19:49] <gary_poster> benji fwiw, I think @media max-width was the css thing I was vaguely recalling that I mentioned yesterday: https://github.com/huwshimi/juju-gui/commit/cbd64eaa00ff4d956bf956f9fda3dec51ff68504
[19:49]  * benji looks
[19:50] <benji> ooh, that's nice
[19:50] <benji> wait, is that the width of the display or the width of the viewport?
[19:50]  * benji googles.
[19:50] <gary_poster> viewport I think
[19:51] <bac> benji: this error should've been solved by your r479 in charmworld, no?
[19:51] <bac> 2014-01-23 19:12:36,165 ERROR [charm.update-bundle][MainThread] E: openstack: The requested relation nova-ceilometer to nova-ceilometer is incompatible
[19:51] <benji> bac: nope, the error is real, my branch made the error give you a hint about how to fix it (reverse the order of the relation)
[19:52] <bac> ok
[19:52] <bac> my misunderstanding
[19:52] <gary_poster> benji you found "width" in https://developer.mozilla.org/en-US/docs/Web/Guide/CSS/Media_queries ?  "The width media feature describes the width of the rendering surface of the output device (such as the width of the document window, or the width of the page box on a printer)."
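(A minimal sketch of the `max-width` media query being discussed; the breakpoint and selector here are illustrative, not taken from Huw's commit:)

```css
/* Hypothetical example: when the viewport is 800px wide or narrower,
   scale the content region down so it still fits on screen. */
@media (max-width: 800px) {
  #content {
    transform: scale(0.8);
    transform-origin: top left;
  }
}
```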
[19:53] <bac> benji: the hint is logged, fwiw
[19:53] <benji> bac: cool
[19:55] <gary_poster> Makyo or others: did you happen to have any ideas for Huw with his projector support branch (https://github.com/huwshimi/juju-gui/commit/cbd64eaa00ff4d956bf956f9fda3dec51ff68504)
[19:55] <gary_poster> "Unfortunately, while I can scale the content I can't get it to fill the browser."
[19:55] <hatch> ok I think I have found the issue....the post is being made to the gui charm, not to juju-core and not being redirected
[19:55] <gary_poster> ah
[19:55] <gary_poster> makes sense
[19:56] <hatch> guess I should have figured that out sooner :/ 
[19:56] <hatch> so....anyone want to have a pre-imp on the best approach for this?
[19:56] <gary_poster> so the charm needs a change.  for now, to not be blocked by that, you might be able to figure out the juju port, expose it, and connect to it directly?
[19:57] <hatch> can I do that without changes to the charm? from the browser I don't really have access to anything along those lines
[19:58] <hatch> I could fake it I suppose with a ssh tunnel or something
[19:58] <gary_poster> well
[19:58] <hatch> call?
[19:58] <gary_poster> for pure hackety hack there are options, I think.  ssh is one
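(One way the ssh "hackety hack" option could look; the user, address, and port below are assumptions for illustration, with 17070 being juju-core's usual API port and `<bootstrap-ip>` a placeholder, not a detail from the conversation:)

```shell
# Forward a local port to the juju-core API on the bootstrap node, so the
# browser can POST to localhost instead of going through the GUI charm.
# -N: no remote command, just the tunnel; -L: local port forward.
ssh -N -L 17070:localhost:17070 ubuntu@<bootstrap-ip>
```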
[19:58] <gary_poster> sure
[19:59] <hatch> https://plus.google.com/hangouts/_/calendar/Z2FyeS5wb3N0ZXJAY2Fub25pY2FsLmNvbQ.j0rk5d371ph8331ijtf48t2uj0?authuser=1
[20:00] <bac> rick_h_: i just watched mjc drive the queued charms and bundles to 0/0
[20:01] <bac> rick_h_: then it queued back up to ~2350 and 26.
[20:01] <bac> that all looks normal.  i'm not sure why bundles ever are allowed to go over 26
[20:01] <bac> but it was at 104 (26 * 4) when i started watching
[20:02] <rick_h_> bac: well bundles only run after charms right?
[20:02] <rick_h_> so if charms don't complete then bundles would get backed up over and over?
[20:02] <bac> rick_h_: yes.  but in one process, until completion
[20:04] <rick_h_> bac: if it's getting through every so often can we look at it in the morning? so it's not 100% broken.
[20:04] <rick_h_> bac: I want to finish this thing for marco before EOD and I've got to look at the ingest code again. I must be missing how something is working. 
[20:05] <bac> rick_h_: yes, tomorrow is fine.  ingest has probably changed a lot since you looked
[20:05] <rick_h_> we start a process that fills queues, and we empty them one at a time, so I would think that if we didn't get the charm queue to 0 before the next ingest cycle, the bundle ones would get put in a holding pattern
[20:10] <gary_poster> rick_h_ whenever you get a chance we can hangout and I can tell you the debug log saga AIUI :-)
[20:11] <rick_h_> gary_poster: sure thing
[20:11] <rick_h_> gary_poster: https://plus.google.com/hangouts/_/7acpi7muavq74fq2ibqs44nhb0?hl=en
[20:11] <hatch> it can't be a 'saga', King will sue you for their trademark
[20:11] <hatch> lol
[20:21] <bac> rick_h_: aha -- staging never got the charm updated.  the charmworld app gets updated automatically by CI but the charm has to be done manually.  so apples-mangos
[20:21]  * bac runs off to hopefully break staging
[20:22] <rick_h_> bac: yes!
[20:22] <rick_h_> bac: this is true
[20:26] <bac> rick_h_: dang, false alarm.  it is running -57, the latest
[20:45] <rick_h_> bac: so catchup call in the morning and go from there? I got amulet working (well build/tests pass) and sent a pull request in
[20:45] <rick_h_> bac: and then we can catch up on wtf charmworld hates now
[20:46] <bac> rick_h_: a-ok
[21:24] <hatch> gary_poster did you create a card for the gui charm support of putcharm? I don't see one but want to make sure I'm not missing it somewhere
[21:24] <gary_poster> hatch, no, thought you were.  sorry for miscommunication.  want me to?
[21:24] <hatch> nope, on it!
[21:25] <hatch> done
[21:28] <gary_poster> thank you
[21:31] <hatch> gary_poster should I assign it to someone? I could take a peek at it but I'm not so sure my python is up to snuff about proxying https :)
[21:32] <gary_poster> hatch: assign it to frankban, send him a note about the background, and cc me?
[21:32] <hatch> sure
[21:32] <gary_poster> thanks
[22:10] <huwshimi> Morning
[22:11] <hatch> morning huwshimi 
[22:15] <huwshimi> hatch: Do you use vagrant?
[22:16] <hatch> I do
[22:16] <hatch> I also use virtual box and parallels
[22:16] <hatch> lol
[22:16] <hatch> so...many...vms
[22:17] <hatch> did you have a question about it?
[22:18] <huwshimi> hatch: Trying to get everything set up in os x, just wondering what's easiest?
[22:18] <hatch> yeah the vagrant workflow is by far the easiest 
[22:19] <hatch> but the virtual box file sharing is really slow for running things like make lint etc
[22:19] <hatch> at some point I will add the nfs setup to the vagrant but....some time
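(The NFS setup hatch mentions wanting to add could look roughly like this Vagrantfile fragment; it is a hypothetical sketch, not something in the repo per the conversation. NFS synced folders require a private network in Vagrant, and are typically much faster than VirtualBox's default shared folders for things like `make lint`:)

```ruby
# Hypothetical Vagrantfile change: share the repo over NFS instead of
# VirtualBox shared folders. The IP matches the one mentioned below.
Vagrant.configure("2") do |config|
  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.synced_folder ".", "/vagrant", type: "nfs"
end
```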
[22:20] <huwshimi> hatch: I think this is the first time I've booted into os x since I installed ubuntu :)
[22:20] <hatch> haha, I'm waiting for 14.04 before trying to tackle that on this mbp
[22:20] <hatch> why are you now working in osx?
[22:21] <huwshimi> hatch: Safari testing
[22:21] <hatch> ahh
[22:53] <hatch> huwshimi the vagrant ip is 192.168.33.10:8888
[22:54] <hatch> just fyi
[22:55] <huwshimi> hatch: Ah great
[23:02] <huwshimi> hatch: What box image do you use?
[23:02] <hatch> umm sec
[23:03] <hatch> huwshimi it should auto set that up when you set up your vagrant image
[23:03] <hatch> from the vagrant file...
[23:03] <hatch> config.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/raring/current/raring-server-cloudimg-amd64-vagrant-disk1.box"
[23:04] <huwshimi> hatch: Oh, so maybe I need to ask what vagrant file you use?
[23:04] <huwshimi> I'm confused
[23:04] <hatch> so you should install vagrant then virtual box then type `vagrant up` in your gui dir
[23:04] <hatch> so have you installed both vagrant and virtual box?
[23:05] <huwshimi> yes
[23:05] <hatch> ok so now clone your gui fork
[23:05] <hatch> and navigate to the root dir in that repo
[23:05] <hatch> then type
[23:05] <hatch> vagrant up
[23:06] <hatch> and wait for a while (subsequent startups will be muuuuuch faster because the image is already set up) 
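(The steps hatch walks through above, as a single sequence; `<your-fork>` is a placeholder for the reader's own GitHub fork, and the URL shape is an assumption:)

```shell
# Prerequisites: VirtualBox and Vagrant installed.
git clone https://github.com/<your-fork>/juju-gui.git
cd juju-gui      # root of the repo, where the Vagrantfile lives
vagrant up       # first boot downloads the box image, so it takes a while
# Subsequent `vagrant up` runs reuse the image and are much faster.
```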
[23:07] <huwshimi> oh I see, we have vagrant specific stuff in our tree
[23:11] <hatch> yup thanks to Makyo 's hard work :)
[23:11] <Makyo> Shit, what'd I do?
[23:11] <Makyo> Oh.
[23:11] <hatch> in fact in London my vm went totally bonkers and I couldn't connect to it through the network there and vagrant allowed me to actually work haha, so I was happy
[23:11] <hatch> :)
[23:11] <hatch> Makyo don't worry, this time it's a good thing :D
[23:12] <Makyo> Whew!
[23:12] <Makyo> I accidentally a good thing.
[23:14] <hatch> haha, it was a good idea
[23:14] <hatch> at some point I'll add nfs support to it
[23:15] <hatch> gary_poster huwshimi  is the daily call AUS edition on today?
[23:15] <huwshimi> hatch: I believe so, up to gary_poster I guess
[23:15] <gary_poster> yes! :-)
[23:20] <hatch> huwshimi do Australians like their sausage? https://twitter.com/rharris334/status/426472243418247168
[23:21] <huwshimi> haha
[23:22] <hatch> lol
[23:31] <gary_poster> huwshimi and jujugui who wanna: https://plus.google.com/hangouts/_/calendar/Z2FyeS5wb3N0ZXJAY2Fub25pY2FsLmNvbQ.dd77sn7kjl6unba21lutdr0p70
[23:32] <huwshimi> gary_poster: Haaving to install a plugin. Be right with you.
[23:32] <Makyo> håving.
[23:33] <huwshimi> :)
[23:53] <hatch> huwshimi just for reference http://yuilibrary.com/yui/docs/api/classes/UA.html#property_os
[23:53] <huwshimi> thanks