[11:05] rick_h_: I have some git questions, please ping me when you are available
=== gary_poster|away is now known as gary_poster
[12:56] frankban and jujugui generally: I am sorry for the short notice but I have to have another call this morning. I will ping you to reschedule asap, as necessary. hopefully it won't be too big of a disruption
[12:57] frankban, fwiw feedback branch merged
[12:57] hazmat: great! thanks
[12:57] gary_poster: np
[12:58] gary_poster: re the quickstart call, I suppose the timeframe we are interested in is from now to the trusty release, right?
[12:59] frankban: right
[12:59] gary_poster: cool thanks
[12:59] np thank you
[13:01] gary_poster: rgr
[13:01] hazmat: I will work on a follow up branch so that the guiserver can use the new feedback stuff, sounds good?
[13:02] frankban, sounds good, i'll hold off on a new release then
[13:02] hazmat: cool
[13:03] rick_h_: following the git workflow described in the GUI docs, I get this: http://pastebin.ubuntu.com/6802788/
[13:03] frankban: what is the current branch you're on?
[13:03] git branch -a
[13:03] dying-service
[13:03] rick_h_: ^^^
[13:04] frankban: what version of git?
[13:04] rick_h_: 1.8.3.2
[13:05] frankban: so like it's saying the branch isn't tracked yet. So you've not pushed it up yet?
[13:05] frankban: e.g. git push origin dying-service
[13:05] oh hmm, I do see the branch
[13:05] rick_h_: this happens before and after the branch push to origin.
[13:06] so it did not track it. Ah, there's a snippet in the demo .gitconfig I think I must have for this
[13:06] frankban: ok, so even though you pushed it's not tracking. You can fix that with the command from the help in the error
[13:06] git branch --set-upstream-to=origin/dying-service dying-service
[13:06] in gitconfig I have [push]
[13:06] default = tracking
[13:07] oh, you do have that. Was just getting ready to paste that
[13:08] well, let's see if setting tracking helps first and figure out why it's not tracking as a follow up
[13:08] if you --set-upstream does the rebase command work?
[13:08] rick_h_: trying
[13:10] rick_h_: now I have nano opened, do I need to write the new commit message?
[13:10] frankban: so it should bring up an editor for you to choose to keep/squash commits
[13:10] frankban: after you choose which to keep/squash you then get to adjust your commit message
[13:11] frankban: see http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html
[13:12] rick_h_: I see this in the editor: http://pastebin.ubuntu.com/6802845/
[13:13] umm, ok...
[13:13] try this please
[13:13] git rebase -i HEAD~~~~
[13:13] rick_h_: so to abort that I have to delete all lines, right?
[13:14] oh hmm, there's not multiple commits
[13:14] rick_h_: I did multiple commits
[13:14] frankban: no, just quit that editor without making any changes
[13:14] frankban: right, but the line "Rebase 1de177e..1de177e onto 1de177e"
[13:14] means that the rebase it was trying to do was all on the same commit, nothing to do
[13:14] rick_h_: yeah
[13:14] rick_h_: trying "git rebase -i HEAD~~~~"
[13:15] does that look better having the last 4 commits in the branch history?
[13:16] rick_h_: http://pastebin.ubuntu.com/6802858/
[13:16] rick_h_: the last 3 commits are the relevant ones for my branch
[13:16] frankban: ok, and the last 3 are yours?
[13:16] rick_h_: yes
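For reference, the tracking setup being discussed above, pulled together in one place. This is only a sketch built from the snippets quoted in the chat; the branch name dying-service comes from frankban's session.

    # ~/.gitconfig snippet frankban already has: make a plain `git push`
    # push the current branch to the remote branch it tracks
    [push]
        default = tracking

    # one-off fix from the rebase error message, for a branch that was
    # pushed before tracking was in effect
    git branch --set-upstream-to=origin/dying-service dying-service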
[13:17] frankban: ok, so you can rebase them here and force push
[13:17] frankban: the --autosquash didn't work because you've already pushed all three commits to your remote (origin dying-service)
[13:18] frankban: --autosquash tries to help you squash new commits locally that you've done since your last push to help limit your rebase activity to only things you've done.
[13:18] so that explains the "Rebase 1de177e..1de177e onto 1de177e"
[13:18] and we're still left with the mystery of why it didn't track when you pushed originally
[13:19] rick_h_: how do I have to change that document?
[13:19] frankban: k, sec
[13:19] * rick_h_ hates how our pastebin doesn't have a reply/edit
[13:20] frankban: so the notes at the bottom help you figure out what to do. You want to squash the last two commits into the first so it should look like http://paste.ubuntu.com/
[13:20] frankban: when you save that document, it should bring up a new editor window with a diff up top and your changelog history for those 3 commits below
[13:20] frankban: and you can edit the commit message to be the complete clean history of what this branch is/does
[13:20] rick_h_: the link you sent is the pastebin home page
[13:21] frankban: sorry, http://paste.ubuntu.com/6802873/
[13:21] * rick_h_ forgot to put in the username to paste
[13:21] rick_h_: so you pick everything including your first commit, and squash your other commits, right?
[13:22] frankban: correct, it works backwards
[13:22] so you're "squashing" the extra commits down into the first and left with one clean one
[13:23] rick_h_: so, we do this so that the juju-gui develop branch only includes relevant changes, one commit for each branch, right?
[13:23] frankban: yes, or more if it makes sense
[13:24] but this is the git way of replacing the way bzr would layer its commits on merge to different branches (trunk)
[13:24] frankban: and it's why we work off in our own forks until review is done.
[13:25] frankban: so if you look https://github.com/juju/juju-gui/commits/develop you can see I kept two commits for my one branch that landed, noting specifically the IE fix required to land.
[13:26] frankban: once you're ok on that and have pushed it up with -f to force overwrite your remote history, please run git config --list and let me know if push.default was picked up right
[13:27] rick_h_: cool, ok so now doing "git push -f origin dying-service"
[13:27] frankban: correct
[13:28] and when you go to https://github.com/frankban/juju-gui/commits/dying-service
[13:28] it's cleaned and one commit in the history now
[13:28] rick_h_: yes
[13:28] which you can use for the pull request to the juju version
[13:29] rick_h_: git config --list -> http://paste.ubuntu.com/6802927/
[13:29] frankban: not sure yet when we should reschedule, but we'll do it. rick_h_ can call when you are ready
[13:30] gary_poster: ack
[13:30] gary_poster: rgr, sec let me get it fired up
[13:30] frankban: afk for a bit
[13:33] hazmat: i made the changes you requested in https://code.launchpad.net/~bac/juju-deployer/parse-constraints/+merge/202674. please merge this branch if you approve. thanks.
[13:46] I've been reading about a lot of people's pipes freezing because of this new weather.... I don't get it, how do the pipes freeze? Aren't they IN the house?
[13:47] or do building practices over there have the pipes in the outside walls or something?
[13:54] hatch, basements aren't insulated terribly well, and also the pipes coming into the house are often the problems.
When you don't have to worry about really cold weather very often, you don't get a good demonstration that everything is working properly very often
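Condensing the workflow rick_h_ walks frankban through into one place. This is a sketch rather than project policy: HEAD~~~~ is simply another spelling of HEAD~4 (each ~ steps back one commit), and dying-service is the branch from the chat.

    git rebase -i HEAD~4        # open the last 4 commits in the editor
    # in the editor: leave "pick" on the first of your commits, change the
    # later ones to "squash", save, then write the combined commit message
    git push -f origin dying-service         # force-overwrite the remote branch history
    git config --list | grep push.default    # confirm the tracking config was picked up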
[13:54] benji no rush but https://plus.google.com/hangouts/_/calendar/Z2FyeS5wb3N0ZXJAY2Fub25pY2FsLmNvbQ.8vsks3qndr814s6te61qlkqn8g when you wanna
[13:55] ohh gotcha, yeah our pipes come into the house 8ft underground and then everything runs on the inside walls, that must be the difference
[13:55] frankban: off call. did you get the pull request ok or hit any other snags?
[14:00] gary_poster: what time is our call today? the email has something different from google cal.
[14:00] bac 2pm my time. ok?
[14:00] gary_poster: nm, subsequent email is correct
[14:00] sorry
[14:00] yes, 2 us/east
[14:02] at least I'm not the only one getting emails from google rescheduling meetings lol
[14:06] rick_h_: on call, will ping later
[14:08] evilnickveitch it would be nice if the headers in the juju docs were hrefs so that we could link to their # without having to do it manually like https://juju.ubuntu.com/docs/charms-deploying.html#local
[14:08] s/were/had
[14:13] gary_poster: i'd like to do the charm testing stuff today and probably tomorrow. that alright?
[14:13] bac sure
[14:13] oh right, /me moves my card back and goes to download amulet
[14:13] rick_h_: it's on lp but authoritative on github
[14:14] bac: yay
[14:23] rick_h_: so do i really have to write charm tests until 9pm, per the invitation?
[14:23] bac: I plan on cheating and writing a *still hacking...hacking..* bot to make it
[14:24] hey, we should collaborate on that. clojure?
[14:24] hah, been thinking of trying that
[14:36] hatch, sure, not all of the headers have ids, i have been adding them when i refresh pages.
[14:36] e.g. authors-hook-debug.html
[14:37] rick_h_: is there a way to make git remember the remote branch when pushing?
[14:38] frankban: my understanding is that's what the config is supposed to do
[14:38] frankban: I wanted to walk through a test branch and see if we could dupe or see something slightly off in the last branch
[14:41] * gary_poster on call now, and out 9:50-10:40 FWIW (that is, I'll be out in 10 minutes, and back about 50 minutes later)
[14:47] evilnickveitch cool thanks, I just noticed it today as a nice-to-have :)
[14:49] guihelp: could anyone please review and QA https://github.com/juju/juju-gui/pull/85 ? thanks!
[14:50] running. biab
[14:52] frankban sure, it'll take me a bit, doing a couple things at once here
[14:52] hatch: np and thank you
[15:01] frankban do you know if in 1.17.1 they changed the output of juju status in local envs? This is what I get after bootstrapping and deploying the gui https://www.evernote.com/shard/s219/sh/e97b60b5-20e8-4d3e-bded-3c0f62efa179/23d15a7ed70c5bdade557146fc08e377
[15:01] 1.17.1 being trunk
[15:02] hatch: that does not look right
[15:02] haha nope
[15:02] maybe I'll ask in juju-dev
[15:02] hatch: no idea, worth pinging them
[15:18] hatch: didn't we decide to still use isTrue and monkey patch assert?
[15:38] frankban I thought that was the last-resort method so that we didn't have to fix all of the old ones
[15:39] so any new ones we could do the 'old fashioned way'
[15:41] hatch: :-/
[15:41] hey! I'm just trying to save us headaches, blame Chai :P
[15:42] boo chai!
[15:42] there we go!
and I agree
[15:42] :)
[15:43] heh
[15:44] I have a huge craving for a SODA
[15:45] i'd say a POP but no one would know what I was talking about :P
[15:46] Makyo so I was doing some research last night on the vagrant stuff and it looks like the typical file sharing system is the virtual box file sharing, not NFS
[15:52] jujugui call in 7 btw
[15:53] frankban I'm just spinning up a juju env to test your branch right now, shouldn't be much longer
[15:54] hatch: cool thanks
[15:58] jujugui call in 2
[16:19] rick_h_: will webops grab log files via an IRC request or do they require an RT?
[16:19] hatch: POP is an SMTP protocol
[16:19] bac: I've had good luck just getting vanguard
[16:19] haha
[16:19] bac: only once I think I got asked to file an RT
[16:19] rick_h_: thx
[16:21] gary_poster: I am available for 1on1 anytime, also tomorrow if that works better
[16:22] thanks frankban. I moved it to tomorrow at today's time. You can change the time in the calendar if that is bad for you
[16:23] gary_poster we have a regression with the inspector and destroying services https://bugs.launchpad.net/juju-gui/+bug/1271986
[16:23] gary_poster: tomorrow same time works for me
[16:23] frankban qa done
[16:24] hatch: great thanks
[16:24] thanks frankban, great
[16:24] hatch, looking
[16:25] and in other news - people are asking for putcharm :) http://askubuntu.com/questions/409637/deploy-local-charm-from-juju-gui/
[16:25] hatch :-/ do you know if it is in release? my guess is yes
[16:26] gary_poster it's in frankban's branch so most likely
[16:26] yeah
[16:26] not horrible, but me no likey regressions. putting on high list in kanban
[16:27] hatch: did not encounter that while QAing my branch :-/
[16:27] pre-existing IIUC
[16:27] frankban it might only happen if the service is pending when you destroy it
[16:28] not sure
[16:28] hatch: never waited for the service to be ready...
[16:28] hmm
[16:28] well then....it was your branch heh
[16:28] hatch: trying it again
[16:28] the bug wasn't caused by you though
[16:28] hatch: maybe it's intermittent
[16:29] oh boy, intermittent failures when the 'test' cycle is 15mins lol
[16:29] this is nothing
[16:40] gary_poster: just checking: if you were implementing the debug-log stream in a REST interface (not RPC-oriented like the API) what would be the most appropriate way of transferring the data? would a long-lived GET request work OK? perhaps a unidirectional websocket connection would be better?
[16:41] gary_poster: i am currently pushing back on the currently mooted design, which uses the normal API for streaming the debug-log data, because it's not a very efficient way of streaming large quantities of data
[16:41] rogpeppe you mean via the RPC?
[16:42] hatch: yeah - that's the current design, which i don't think is quite right
[16:42] I disagree, websockets were designed for exactly this
[16:42] hatch: yes, websockets, but not RPC-over-websockets
[16:43] hatch: so you'd suggest a unidirectional websocket rather than a long-lived streaming GET request?
[16:43] rogpeppe that would be my preference
[16:43] hatch: ok, cool.
[16:43] I haven't been following the api discussion though
[16:43] rogpeppe: interesting question. long-lived GET feels reasonable at first cut. I'd be interested in hearing what the strategy was for both sides protecting themselves from too much content/insufficient read speed.
websocket might give more options there
[16:44] gary_poster: well, TCP flow control should do the job sufficiently
[16:44] although perhaps "drop the connection when problems happen" is the right strategy
[16:44] long polling is so....1995 :P
[16:44] gary_poster: rogpeppe I'd want to peek at how/if we hit multiple-websocket limitations and such.
[16:44] hatch: FWIW I am not able to reproduce the inspector bug (using my branch). what charm did you use?
[16:44] hatch: the advantage of long polling is that there's zero protocol overhead above TCP
[16:44] it's interesting if we can run a couple at a time as we're monitoring a couple of units at a time, or getting something that allows multiple watches over the single websocket
[16:45] but really though streaming data like debug-log is what a websocket was pretty much designed for - it's similar to games sending data between the client and server
[16:45] hatch: except that's usually bi-directional
[16:45] rick_h_: you should be able to run any number; but you'd use a connection for each stream.
[16:45] sure, which our current websocket also is
[16:45] which isn't needed (I don't think)
[16:46] so you guys would like to open up an additional websocket for each debug-log stream?
[16:46] rogpeppe: had to refresh my flow control knowledge.
[16:46] hatch: that's my suggestion, yes
[16:46] sorry maybe I should just go read the thread
[16:46] this is the thread I think? :-)
[16:46] rogpeppe hmm
[16:46] gary_poster: +1
[16:46] * hatch is processing
[16:47] rogpeppe: I don't see an issue with the GET for this use case atm
[16:47] websockets seem unnecessary
[16:47] gary_poster: yea, my only thing there was we hit limitations in long-poll of 6? per browser in LP?
[16:48] I'm trying to recall when it hit a limit but know some people hit it
[16:48] it was per-domain based if I recall
[16:48] true, though we already were thinking that we needed a relatively low limit like that
[16:48] yes
[16:48] ok I'm ready
[16:48] right, but two browsers doubles that and hits limit faster
[16:48] two tabs
[16:48] etc
[16:48] not sure why you'd have it but want to think on it
[16:49] I'm throwing my hat in with websockets because of the limitations rick_h_ mentioned (although I think that's going to be more work for us)
[16:49] i'm thinking that if you want to monitor many units/machines, it would be better to have a single stream multiplexing all of them
[16:49] yup
[16:49] that seems like a safer choice
[16:49] if we do that, then we can use the current websocket
[16:49] rogpeppe: yea, we were going to look at what that number is.
[16:49] hatch: not really
[16:49] rick_h_ it's different for every browser
[16:49] hatch: the current websocket is strictly RPC-only
[16:49] I'm pro second websocket connect from the top of my head
[16:50] rogpeppe oh ok, I'm not familiar with how it's set up on the core side
[16:50] so second websocket which multiplexes the data?
[16:50] hatch: right, that allows requesting watches for units 1-5
[16:50] how would you add a new stream?
[16:50] in the multiplexed story
[16:50] gary_poster just throwing this out there....that ws could also accept requests
[16:50] gary_poster: you'd ask for a new stream with the new set of units/machines
[16:50] or the rpc one could handle that
[16:51] * hatch wishes he knew more about how core worked
[16:51] gary_poster: we'd have to be careful about how to deal with the potential overlap
[16:51] Right, the RPC channel affecting the secondary channel on the fly seems a bit odd
[16:51] gary_poster: i don't think that's a good idea
[16:51] so how would one open the secondary connection? It would have to be through the rpc no?
[16:51] gary_poster: because you might actually be talking to two different API servers
[16:52] hatch: the same way charm upload is done now
[16:52] feels more natural to let the debug log websocket be bidirectional then
[16:52] is that what you are describing rogpeppe?
[16:52] rogpeppe http?
[16:52] so http get to open a connection then bidirectional websocket after?
[16:52] gary_poster: actually, a bi-directional (but not RPC-oriented) websocket connection could work well
[16:52] that feels weird
[16:52] rogpeppe: +1
[16:52] log socket
[16:53] how so hatch? you open a debug log channel, you tell the debug log channel what you want to hear
[16:53] hatch: you'd open the websocket connection in exactly the same way you'd open the current API RPC connection (except with a different URL path)
[16:53] right sorry my mind was being stupid
[16:53] ignore what I just said
[16:53] I like this idea
[16:53] :)
[16:53] :-)
[16:54] rogpeppe: so this is trashing existing work, yeah? :-/
[16:54] ok +1 to new non-rpc based bidi websocket
[16:54] gary_poster: i'm afraid so
[16:54] rogpeppe: is the win really worth it?
[16:54] oh poo
[16:54] gary_poster: but i really feel that using an RPC-based API is not good
[16:54] I didn't think this was already implemented
[16:54] almost landed, almost approved by william
[16:54] gary_poster: you're usually dealing with a very high latency connection
[16:55] gary_poster: and we'll really feel the slowness if every read of a set of debug messages involves a round trip
[16:56] i haven't looked at the implementation yet. i'll just do so. the adaptation might be easy, in fact
[16:57] rogpeppe: on the face of it, it feels like getting the current solution landed and planning the other solution later would be reasonable. If this can happen quickly (without slowing things down) then I'm +1; otherwise, if this pushes things out, I'm pretty concerned
[16:59] yeah we would like the feature sooner rather than later
[17:13] *sigh* I swear something is trying to stop me from finishing this branch
[17:28] hey jujugui. Is everyone available for a quick call? Or alternatively, who is available for a quick call?
[17:28] i am
[17:29] I need a break from trying to build juju-core anyways :)
[17:29] I am
[17:29] heh
[17:29] sure
[17:29] yep
[17:30] ok cool
[17:30] jujugui, https://plus.google.com/hangouts/_/canonical.com/team-call (Makyo and rick_h_ will fill in later--ping me or come on by)
[17:33] rogpeppe, what's wrong the log push down the existing websocket connection.. it's just multiplexing the connection
[17:33] argh... what's wrong with
[17:33] hazmat: because if it fills up, then it DOS's the rest of the connection
[17:34] rogpeppe, meaning the client wasn't consuming fast enough... which could happen either way.. failure recovery is effectively the same though..
reconnect
[17:35] hazmat: no, if the client isn't consuming fast enough and there's a separate connection, there's no problem. TCP flow control will work fine there.
[17:35] hazmat: we're talking about a potentially large volume of data here
[17:35] hazmat: that will probably be very bursty too
[17:37] rogpeppe, this feels a bit theoretical. the gui isn't going to be subscribing to the entire env all the time, but more targeted as a context, it will close flows when the focus changes.
[17:37] hazmat: we're going to be using this interface for the command line too
[17:38] hazmat: and individual units/machines can produce large quantities of data too
[17:38] hazmat: plus this way is *much* simpler
[17:38] * Makyo starts sending emails behind gary_poster's back :T
[17:39] Makyo: heh
[17:42] I couldn't resist: http://tinyurl.com/g-day-countdown
[17:42] lol!
[17:42] rogpeppe, individual hooks spewing large amounts isn't very normal.. disks are finite... while having a separate stream is nicer for flow, it's going to be some rewrite/lost time. not really sure how it's simpler.. isn't the traffic effectively the same?
[17:43] rogpeppe, also just curious, is this going to include filtering and subscribing to individual contexts of interest (unit/machine), or the current read from the firehose with replay approach?
[17:43] hazmat: if you're talking about using the existing websocket connection to stream data unidirectionally, that's a significant architecture change and would require quite a bit of attention
[17:43] hazmat: the former
[17:44] hazmat: (i've almost done it)
[17:45] rogpeppe, ah.. simpler because it doesn't need to work around the existing json rpc server, ic..
[17:45] hazmat: yeah.
[17:45] hazmat: i like having one protocol per connection
[17:46] rick_h_: we're seeing this on m.j.c. -- does prod have a different port? ConnectionError: HTTPConnectionPool(host='localhost', port=2464): Max retries exceeded with url: /api/3/bundle/proof (Caused by : [Errno 111] Connection refused)
[17:50] frankban still here?
[17:51] hatch: almost, maybe, go ahead
[17:52] :) frankban I'm wondering if you have ever run into an issue with go where it cannot find some dependencies? cannot find package "code.google.com/p/go.crypto/ssh" for example
[17:52] this error comes when I run `go install` and i have run `go get` already
[17:52] hatch: weird...
[17:53] go get -v?
[17:53] yeah, I didn't get this issue in my VMs but on metal it is throwing this
[17:53] frankban I'm running it again
[17:53] maybe it dropped some before?
[17:54] hatch: it's possible... check inside $GOPATH/src/
[17:55] bac: I'm not sure. It'll be in the production.ini
[17:55] bac: have them get that file and check the port in the config for the app running
[17:55] hatch: it should contain a directory named code.google.com
[17:55] bac: but it sounds like it can't hit itself locally?
[17:55] frankban ok I'll figure this out, it's past your EOD so you can take off :)
[17:55] bac: for proof calls?
[18:05] rick_h_: yeah, proof calls. recall this was part of the charmproof subclassing i did to allow us to specify the server? but it uses the config value in the ini file on both sides of the connection. same variable.
[18:07] bac: is the variable added into the base that the production.ini is generated from?
[18:10] bac: I'm trying to look and see.
the ini file in production is generated in the charm hooks to be a combination of a base file + overrides
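Circling back to the debug-log design rogpeppe and gary_poster settled on above (a dedicated, non-RPC websocket per log stream), here is a minimal client-side sketch using the third-party Python websocket-client package. The /debug-log path, the port, and the JSON payload are illustrative assumptions, not the actual juju-core API.

    # sketch only: endpoint path, port, and message format are assumptions
    from websocket import create_connection  # pip install websocket-client

    # the RPC API keeps its own connection; logs get a dedicated socket
    ws = create_connection("wss://juju-state-server:17070/debug-log")
    ws.send('{"units": ["wordpress/0", "mysql/0"]}')  # say what to follow
    try:
        while True:
            record = ws.recv()  # one log line per message; reading slowly here
            print(record)       # lets TCP flow control back-pressure the server
    finally:
        ws.close()

The point of the sketch is the separation: a slow reader only stalls this socket, so the main RPC connection is never starved, which is exactly the DoS concern rogpeppe raises with multiplexing logs over the existing connection.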
[18:10] rick_h_: i don't understand the question
[18:10] bac: and I forget what that generation starts with
[18:10] rick_h_: oh, that
[18:10] bac: so if a new variable is added to the ini file, it must be added to whatever that hook uses as a base and changed by the overrides in production
[18:10] rick_h_: i'm looking to see if that port is exposed to the charm. and if so will have them set it.
[18:11] rick_h_: but it was not new
[18:11] bac: it shouldn't have to be exposed if it's accessed via localhost?
[18:11] at least I didn't think so
[18:11] hazmat: here's the code (well, i haven't actually *run* it) for the log streaming. the deleted code is the currently proposed implementation. https://codereview.appspot.com/56100043/
[18:12] rick_h_: not exposed as in "opened firewall". i mean exposed as part of the charm's config.yaml for juju get and juju set. sorry for bad word choice.
[18:12] bac: ah ok
[18:15] frankban figured out the issue with the juju-core deps
[18:15] oh he's gone :)
[18:21] rick_h_: i'm asking for the production.ini file
[18:21] on staging proof.port=6543. on production it is trying to use 2464.
[18:23] right but what is the service running on on prodstack? Do you have the production.ini?
[18:23] [server:main]
[18:23] use = egg:Paste#http
[18:23] host = 0.0.0.0
[18:23] port = 2464
[18:23] is the default.ini
[18:23] staging might change that, wonder if prodstack is on the default port
[18:24] gary_poster looks like my issues with juju-core were related to the vm, it appears to be working as expected on metal....
[18:33] rick_h_: yeah, 'port' and 'proof.port' are different in the production.ini file. hmm, seems like a DRY violation ass biting
[18:33] bac: rgr
[18:33] bac: so have to trace the code in the charm that's generating the file and see where the mixup is and correct
[18:34] bac: but a cowboy patch of fixing the ini file on prodstack should get us unstuck
[18:34] rick_h_: i *think* I.S. maintains a custom version of the charm with custom production_overrides.ini
[18:35] bac: ah, is that part of the 'IS black box' bit we don't see?
[18:44] gary_poster: when you get a chance can we catch up on the debug log discussion? it was simu-cast and want to make sure I'm caught up
[18:44] heh sure
[18:44] will ping soon
[18:44] rgr
[18:50] thanks rick_h_
=== BradCrittenden is now known as bac
[18:51] bac: np, sucky to hit a bug in that. It's not had an issue so far and I don't see anything freaky in looking at the config files
[19:01] bac, no rush, but I'm ready in https://plus.google.com/hangouts/_/calendar/Z2FyeS5wb3N0ZXJAY2Fub25pY2FsLmNvbQ.b6tepoq090fnj4qfdg23a3okhg when you are
[19:01] rick_h_: dang -- http://manage.jujucharms.com/heartbeat
[19:02] bac :(
[19:12] rick_h_: if it makes things easier, I can take your current pull req for amulet, merge it then merge mine and do a quick conflict resolution
[19:12] if that frees you up
[19:13] marcoceppi: sec, I've got to finish fixing my sysdeps and add the test deps bits
[19:13] k
[19:13] marcoceppi: I'll have an update and rebase it up clean in a few min
[19:13] <3
[19:18] gary_poster There appears to be some sort of bug in juju-core which is returning a 405 when I attempt to post a charm. If no one gets back to me in juju-dev I'll have to wait until Dimiter gets in in the morning
[19:18] hatch ack
[19:20] I've also added a card to the needs spec column for fakebackend support.
I think we will require the js zip lib which is required for DD'ing of a folder but I figure we should have a call as to whether we even want to support fakebackend support
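Back on the production.ini mismatch bac and rick_h_ are chasing above: the two settings have to describe the same port, because the proof client calls the app over localhost. The port values are the ones quoted in the chat; the section the proof.port key actually lives in is a guess, not confirmed from the file.

    [server:main]
    use = egg:Paste#http
    host = 0.0.0.0
    # port the charmworld wsgi app listens on
    port = 2464

    # section name is a guess; the key is the one bac mentions
    [app:main]
    # port the proof client hits on localhost; it must match the server port,
    # otherwise proof calls fail with "Connection refused" as in the traceback above
    proof.port = 2464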
[19:22] hatch, it's not a matter of if we want it, IMO: it's a matter of cost/value. If we can get it done in a few days it is a no-brainer IMO. This is the kind of thing we want to be able to demo.
[19:22] if it takes more than a few days, harder call
[19:23] yeah sorry that's what I meant. I don't have any experience with the zip libs in general so hard to say without more research
[19:26] rick_h_: sorry dropped out and had a call with gary_poster. did you dig any further? i just asked for the new app.log.
[19:27] bac: no, was waiting to hear what you found out sorry
[19:27] rick_h_: i'll handle this from here, though.
[19:27] bac: let me know if I can help
[19:27] ugh, deej disappeared.
[19:49] benji fwiw, I think @media max-width was the css thing I was vaguely recalling that I mentioned yesterday: https://github.com/huwshimi/juju-gui/commit/cbd64eaa00ff4d956bf956f9fda3dec51ff68504
[19:49] * benji looks
[19:50] ooh, that's nice
[19:50] wait, is that the width of the display or the width of the viewport?
[19:50] * benji googles.
[19:50] viewport I think
[19:51] benji: this error should've been solved by your r479 in charmworld, no?
[19:51] 2014-01-23 19:12:36,165 ERROR [charm.update-bundle][MainThread] E: opens
[19:51] tack: The requested relation nova-ceilometer to nova-ceilometer is incompatible
[19:51] bac: nope, the error is real, my branch made the error give you a hint about how to fix it (reverse the order of the relation)
[19:52] ok
[19:52] my misunderstanding
[19:52] benji you found "width" in https://developer.mozilla.org/en-US/docs/Web/Guide/CSS/Media_queries ? "The width media feature describes the width of the rendering surface of the output device (such as the width of the document window, or the width of the page box on a printer)."
[19:53] benji: the hint is logged, fwiw
[19:53] bac: cool
[19:55] Makyo or others: did you happen to have any ideas for Huw with his projector support branch (https://github.com/huwshimi/juju-gui/commit/cbd64eaa00ff4d956bf956f9fda3dec51ff68504)
[19:55] "
[19:55] Unfortunately
[19:55] > while I can scale the content I can't get it to fill the browser.
[19:55] "
[19:55] ok I think I have found the issue....the post is being made to the gui charm, not to juju-core and not being redirected
[19:55] ah
[19:55] makes sense
[19:56] guess I should have figured that out sooner :/
[19:56] so....anyone want to have a pre-imp on the best approach for this?
[19:56] so the charm needs to change. for now, to not be blocked by that, you might be able to figure out the juju port, expose it and connect to it directly?
[19:57] can I do that without changes to the charm? from the browser I don't really have access to anything along those lines
[19:58] I could fake it I suppose with an ssh tunnel or something
[19:58] well
[19:58] call?
[19:58] for pure hackety hack there are options, I think. ssh is one
[19:58] sure
[19:59] https://plus.google.com/hangouts/_/calendar/Z2FyeS5wb3N0ZXJAY2Fub25pY2FsLmNvbQ.j0rk5d371ph8331ijtf48t2uj0?authuser=1
[20:00] rick_h_: i just watched mjc drive the queued charms and bundles to 0/0
[20:01] rick_h_: then it queued back up to ~2350 and 26.
[20:01] that all looks normal. i'm not sure why bundles ever are allowed to go over 26
[20:01] but it was at 104 (26 * 4) when i started watching
[20:02] bac: well bundles only run after charms right?
[20:02] so if charms don't complete then bundles would get backed up over and over?
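For the "hackety hack" workaround gary_poster suggests at 19:56-19:58 (reaching the juju-core API directly instead of going through the GUI charm), an ssh port forward along these lines would do it; the machine address, local port, and API port here are illustrative assumptions rather than values agreed in the chat.

    # forward a local port to the state server's API port, then point the
    # charm-upload POST at https://localhost:8443/... instead of the charm
    ssh -N -L 8443:localhost:17070 ubuntu@<state-server-address>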
[20:02] rick_h_: yes. but in one process, until completion
[20:04] bac: if it's getting through every so often can we look at it in the morning? so it's not 100% broken.
[20:04] bac: I want to finish this thing for marco before EOD and I've got to look at the ingest code again. I must be missing how something is working.
[20:05] rick_h_: yes, tomorrow is fine. ingest has probably changed a lot since you looked
[20:05] we start a process, that fills queues and we empty them one at a time so I would think if we didn't get the charm queue to 0 before the next ingest cycles the bundle ones would get put in a holding pattern
[20:10] rick_h_ whenever you get a chance we can hangout and I can tell you the debug log saga AIUI :-)
[20:11] gary_poster: sure thing
[20:11] gary_poster: https://plus.google.com/hangouts/_/7acpi7muavq74fq2ibqs44nhb0?hl=en
[20:11] it can't be a 'saga' King will sue you for their trademark
[20:11] lol
[20:21] rick_h_: aha -- staging never got the charm updated. the charmworld app gets updated automatically by CI but the charm has to be done manually. so apples-mangos
[20:21] * bac runs off to hopefully break staging
[20:22] bac: yes!
[20:22] bac: this is true
[20:26] rick_h_: dang, false alarm. it is running -57, the latest
[20:45] bac: so catchup call in the morning and go from there? I got amulet working (well build/tests pass) and sent a pull request in
[20:45] bac: and then we can catch up on wtf charmworld hates now
[20:46] rick_h_: a-ok
[21:24] gary_poster did you create a card for the gui charm support of putcharm? I don't see one but want to make sure I'm not missing it somewhere
[21:24] hatch, no, thought you were. sorry for miscommunication. want me to?
[21:24] nope, on it!
[21:25] done
[21:28] thank you
[21:31] gary_poster should I assign it to someone? I could take a peek at it but I'm not so sure my python is up to snuff about proxying https :)
[21:32] hatch: assign it to frankban, send him a note about the background, and cc me?
[21:32] sure
[21:32] thanks
[22:10] Morning
[22:11] morning huwshimi
[22:15] hatch: Do you use vagrant?
[22:16] I do
[22:16] I also use virtual box and parallels
[22:16] lol
[22:16] so...many...vms
[22:17] did you have a question about it?
[22:18] hatch: Trying to get everything set up in os x, just wondering what's easiest?
[22:18] yeah the vagrant workflow is by far the easiest
[22:19] but the virtual box file sharing is really slow for running things like make lint etc
[22:19] at some point I will add the nfs setup to the vagrant but....some time
[22:20] hatch: I think this is the first time I've booted into os x since I installed ubuntu :)
[22:20] haha, I'm waiting for 14.04 before trying to tackle that on this mbp
[22:20] why are you now working in osx?
[22:21] hatch: Safari testing
[22:21] ahh
[22:53] huwshimi the vagrant ip is 192.168.33.10:8888
[22:54] just fyi
[22:55] hatch: Ah great
[23:02] hatch: What box image do you use?
[23:02] umm sec
[23:03] huwshimi it should auto set that up when you set up your vagrant image
[23:03] from the vagrant file...
[23:03] config.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/raring/current/raring-server-cloudimg-amd64-vagrant-disk1.box"
[23:04] hatch: Oh, so maybe I need to ask what vagrant file you use?
[23:04] I'm confused
[23:04] so you should install vagrant then virtual box then type `vagrant up` in your gui dir
[23:04] so have you installed both vagrant and virtual box?
[23:05] yes
[23:05] ok so now clone your gui fork
[23:05] and navigate to the root dir in that repo
[23:05] then type
[23:05] vagrant up
[23:06] and wait for a while (subsequent startups will be muuuuuch faster because the image is already set up)
[23:07] oh I see, we have vagrant specific stuff in our tree
[23:11] yup thanks to Makyo's hard work :)
[23:11] Shit, what'd I do?
[23:11] Oh.
[23:11] in fact in London my vm went totally bonkers and I couldn't connect to it through the network there and vagrant allowed me to actually work haha, so I was happy
[23:11] :)
[23:11] Makyo don't worry, this time it's a good thing :D
[23:12] Whew!
[23:12] I accidentally a good thing.
[23:14] haha, it was a good idea
[23:14] at some point I'll add nfs support to it
[23:15] gary_poster huwshimi is the daily call AUS edition on today?
[23:15] hatch: I believe so, up to gary_poster I guess
[23:15] yes! :-)
[23:20] huwshimi do Australians like their sausage? https://twitter.com/rharris334/status/426472243418247168
[23:21] haha
[23:22] lol
[23:31] huwshimi and jujugui who wanna: https://plus.google.com/hangouts/_/calendar/Z2FyeS5wb3N0ZXJAY2Fub25pY2FsLmNvbQ.dd77sn7kjl6unba21lutdr0p70
[23:32] gary_poster: Haaving to install a plugin. Be right with you.
[23:32] håving.
[23:33] :)
[23:53] huwshimi just for reference http://yuilibrary.com/yui/docs/api/classes/UA.html#property_os
[23:53] thanks
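For reference, the Vagrant workflow hatch walks huwshimi through above, condensed. The box URL and the 192.168.33.10:8888 address are the values quoted in the chat; `vagrant ssh` and `vagrant halt` are standard Vagrant commands rather than anything project-specific.

    # from the root of your juju-gui clone (the repo ships its own Vagrantfile,
    # which points config.vm.box_url at the raring cloud image)
    vagrant up      # first run downloads the box, so it takes a while
    vagrant ssh     # shell into the vm if you need to run make lint etc.
    # the running GUI is reachable from the host at http://192.168.33.10:8888
    vagrant halt    # shut the vm down when you're done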