[12:57] <frankban> rick_h_: ping
[12:57] <rick_h_> frankban: pong
[12:58] <frankban> rick_h_: could you please take a look at my comments on https://codereview.appspot.com/52080044/ ?
[12:58] <frankban> rick_h_: (and thanks for your great QA)
[12:58] <rick_h_> frankban: looking
[12:58] <rick_h_> sorry, slow start this morning
[12:59] <frankban> rick_h_: np, just re-proposed
[13:02] <rick_h_> frankban: sounds good for the most part. I'm a very cli heavy person so I often found the quickstart very chatty overall which I ack is a personal preference kind of thing. 
[13:03] <rick_h_> frankban: I'll try to dupe the one where I reran quickstart and went to 'use' an environment and it went back through the install ppa, etc. process
[13:03] <rick_h_> and very cool on "If you accidentally removed a
[13:03] <rick_h_> bootstrapped environment from the envs.yaml file
[13:03] <rick_h_> you can still destroy it passing its name to
[13:03] <rick_h_> `juju destroy-environment`. "
[13:03] <rick_h_> I wasn't aware of that
[13:03] <frankban> rick_h_: heh, I believe we are not exactly the target for quickstart ;-) or at least, it is intended to be used also by others
[13:04] <rick_h_> frankban: understood...but but but :)
[13:04] <rick_h_> frankban: on the plus side I asked jcastro and lazypower looked at it as well to try to get some more QA
[13:04] <rick_h_> and they had nothing but "Awesome!" and "shipit!" to say about it
[13:04] <frankban> rick_h_: awesome, good move! thanks!
[13:04] <rick_h_> and I was trying to be hyper critical due to 1.0 and such
[13:05] <frankban> rick_h_: could you please run the interactive session again to check UI changes (after pulling the branch)?
[13:05] <rick_h_> frankban: so let me get my coffee going and I'll try to reproduce the one issue, then update and rerun some QA on the updates. And thanks for those. 
[13:05] <frankban> cool
[13:05] <frankban> thank you
[13:41] <rick_h_> frankban: ok, I can't dupe my thing with the ppa now. I'm not sure how I hit that. I did end up hanging quickstart because my lxc launch errored and quickstart hung with bringing up the gui
[13:42] <hatch> morning
[13:42] <rick_h_> frankban: not sure if we care or if this is just a corner case http://paste.ubuntu.com/6762128/
[13:42] <frankban> rick_h_: what's the last message from quickstart?
[13:43] <rick_h_> frankban: sorry, terminal is cleared when I ctrl-c'd quickstart. It was about 'bringing up juju-gui'
[13:43] <rick_h_> during the deploy the gui step
[13:43] <rick_h_> but it seemed to hang
[13:44] <rick_h_> I'm having some issues in QA as lxc and trusty have some issues atm
[13:45] <frankban> rick_h_: ok, does this happen also in a fresh lxc install?
[13:45] <frankban> rick_h_: I mean trusty + lxc
[13:50] <rick_h_> frankban: I'm not sure. I've got a few different kinds of errors I'm working around atm
[13:51] <frankban> rick_h_: could you please reproduce the error running quickstart with --debug?
[13:51] <frankban> rick_h_: and paste the output (warning: password is in the debug output)?
[13:52] <frankban> rick_h_: if that error is included in the megawatcher data it could be trivial to keep quickstart from hanging
[13:53] <rick_h_> frankban: sure thing. Let me try to set it up again
[13:53] <frankban> rick_h_: thanks
[13:57] <rick_h_> frankban: yea, looks like there is some api stuff. This is a slightly different error. I've so confused my local lxc with all this testing. https://pastebin.canonical.com/103053/
[14:04] <frankban> rick_h_: I see. So the machine is in an error state. Quickstart only watches the unit. This is not related to this branch, but it maybe should be handled by another branch before 1.0. What do you think?
[14:04] <rick_h_> frankban: sounds ok to me. Seems like this might get us too many chasing error reports. 
[14:06] <frankban> rick_h_: this should not happen, but when it does, quickstart effectively hangs, because the unit will be forever in a "pending" state
[14:06] <rick_h_> frankban: right, and I've done it in two admittedly broken ways
[14:08] <frankban> rick_h_: ok, so 1.0 can wait for this to be handled, and it will be my next card. if you agree, I'll ask you to reproduce the machine error again later, to check quickstart no longer hangs.
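The fix frankban describes — watching machine state as well as unit state, so a broken machine surfaces as an error instead of an endless "pending" wait — might look roughly like this. This is an illustrative sketch only; the names and structure are invented, not quickstart's actual watcher API:

```python
# Illustrative sketch: fail fast when the machine hosting the unit
# errors, rather than waiting forever for the unit to leave "pending".
# The dict shapes mimic simplified megawatcher-style status records.


def check_deployment(unit, machine):
    """Return True once the unit is started.

    Raise RuntimeError if either the unit or its machine is in an
    error state, since the unit can then never become ready.
    """
    if machine.get("status") == "error":
        raise RuntimeError(
            "machine {} is in an error state: {}".format(
                machine["id"], machine.get("status-info", "")))
    if unit.get("status") == "error":
        raise RuntimeError(
            "unit {} is in an error state".format(unit["id"]))
    return unit.get("status") == "started"
```

This mirrors the behavior rick later QAs at [17:08], where quickstart exits with "machine 1 is in an error state" instead of hanging.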
[14:09] <rick_h_> frankban: cool
[14:09] <rick_h_> I live to fail for you :)
[14:09] <frankban> rick_h_: :-) other comments?
[14:10] <rick_h_> frankban: going back through the email
[14:12] <rick_h_> frankban: looks good. the newline from the initial landing UI before the first bullet isn't there, but that's a tiny thing. 
[14:12] <frankban> rick_h_: I see it
[14:13] <frankban> rick_h_: I mean, I see the new line here
[14:13] <frankban> rick_h_: before "automatically create and bootstrap a local environment"
[14:14] <rick_h_> frankban: correct
[14:14] <rick_h_> oh hmm, maybe terminal differences then
[14:15] <frankban> rick_h_: do you see "new Amazon EC2 environment" below?
[14:15] <frankban> (just to check I correctly pushed all the changes)
[14:17] <rick_h_> frankban: nvm, ignore me
[14:17] <rick_h_> I see it
[14:17] <frankban> rick_h_: ok
[14:34] <hatch> oo boy I love bugs that only show up on prod
[14:36] <rick_h_> wheee
[14:36] <rick_h_> the nice thing is that with the charm now you can run 'prod' but with uncompressed files, I think.
[14:37] <hatch> well I can dupe locally so that's nice
[14:51] <hatch> rick_h_ https://codereview.appspot.com/52790043/diff/60001/hooks/utils.py isn't git available already on juju instances?
[14:52] <rick_h_> hatch: doing reviewer comments atm
[14:52] <rick_h_> hatch: if so then it should be fine, but not 100% sure
[14:52] <rick_h_> if you create an lxc container env is it?
[14:53] <hatch> well I was pretty sure that juju used git to keep track of something-or-other 
[14:53] <rick_h_> I didn't realize that at all
[14:56] <hatch> so including it probably doesn't hurt anything either :)
[14:56] <rick_h_> explicit > implicit says the Python heroes of old
[14:58] <rick_h_> jujugui so I've got the charm supporting git branch up for review with qa instructions and reviewer comments. https://codereview.appspot.com/52790043/
[14:58] <rick_h_> any takers? /me looks at hatch 
[14:58] <hatch> Python....Python....that's the language they replaced with Ruby right?
[14:58] <hatch> :P
[14:58] <bac> rick_h_: you need one or two?  i'll do one.
[14:59] <rick_h_> bac: I think I just need one.
[14:59] <bac> rick_h_: well i'll be glad to unless you want hatch.rb to do it
[15:00] <hatch> if bac can take it that would be awesome, I'm trying to track down a bug in browser.js 
[15:00] <hatch> in compressed files
[15:00] <rick_h_> bac: I appreciate a true developers insight into the review
[15:00] <hatch> lol
[15:00] <bac> rick_h_: i thought you might
[15:01] <bac> rick_h_: let me get this proposal written first (warning)
[15:01] <rick_h_> bac: np, thanks
[15:03] <hatch> ugh consolemanager code can die in a fire
[15:08] <bac> benji: is your bundle branch likely to be finished today?  i'd like to do a release of charmworld this afternoon to avoid the friday disappointment.
[15:09] <jcastro> hey rick_h_
[15:10] <jcastro> do you know the tldr on charm store pages?
[15:10] <jcastro> like how you guys were going to bust them out to be separate from the gui?
[15:10] <rick_h_> 'charm store pages'?
[15:10] <rick_h_> jcastro: tldr is that it's back burnered and possibly part of other things that will solve the problem in different ways
[15:10] <jcastro> ok
[15:10] <rick_h_> from people way high up
[15:10] <jcastro> any word on SEO/URL fixes then?
[15:10] <jcastro> here's our problem
[15:10] <rick_h_> no
[15:11] <jcastro> we're doing a charm audit, and we can't find our charms
[15:11] <jcastro> so, I end up on an out of date github imported page
[15:11] <rick_h_> understood
[15:11] <rick_h_> I think it's been thought that the cleaned up pages on manage.jujucharms.com are a 'good enough' stop
[15:11] <jcastro> hmm, should I whine to escalate? If we can't find our own charms how are users going to find them
[15:12] <rick_h_> if that's not true, you can bring it up and such, but there's nothing planned atm
[15:12] <rick_h_> at least that I'm aware of, I don't know if some other team/etc is thinking or looking into it. 
[15:12] <jcastro> ok
[15:12] <jcastro> I'll bring it up at the cross team
[15:13] <rick_h_> sounds good
[15:13] <jcastro> which I think is today?
[15:13] <rick_h_> no idea, gary is out until tomorrow so if it's today he won't be there from us
[15:13] <lazypower> yeah
[15:13] <lazypower> in about an hour and a half
[15:13] <jcastro> ok no worries, it's in an hour and 15
[15:13] <bac> oh, benji isn't here today is he?
[15:13] <jcastro> rick_h_, is URL-niceness part of that work or is that a different thing?
[15:13] <rick_h_> bac: oh right, he's out until monday
[15:14] <bac> i guess his work won't land.  pfft.
[15:14] <rick_h_> jcastro: quick call?
[15:14] <rick_h_> bac: yea, sorry. he chatted with me about handing it off, but since everyone is out we thought it could wait
[15:14] <rick_h_> sorry to mess up the deploy 
[15:14] <jcastro> rick_h_, yeah fire it up!
[15:14] <bac> rick_h_: not messed up.  i'll go ahead with my stuff.
[15:15] <bac> rick_h_: or i can pick up his branch this afternoon.
[15:45] <bac> rick_h_: when you have time could you review https://codereview.appspot.com/51010046/ ?
[15:45] <hatch> oh man this bug that I'm working on has existed forever
[15:45] <hatch> heh oops
[15:45] <rick_h_> bac: sure thing
[15:50] <hatch> jujugui call in 10
[15:56] <hatch> ugh yet another double dispatch bug
[15:56] <hatch> can we just start over?
[15:56] <hatch> lol
[15:58] <hatch> jujugui call in 2
[16:00] <hatch> hmm apparently hangouts hates me
[16:22] <rick_h_> hatch: hah, but we managed to keep you connected the whole time
[16:22] <rick_h_> your interwebs are strange up there in canada
[16:23] <hatch> lol
[16:23] <hatch> I just rebooted my router, running on hotspot now
[16:23] <hatch> hopefully the reboot fixes it
[16:51] <frankban> rick_h_: before I proceed, could you please check that lxc/trusty error using lp:~frankban/juju-quickstart/handle-machine-errors (it's just a prototype).
[16:52] <rick_h_> frankban: sec, yep
[16:52] <frankban> rick_h_: thanks
[16:57] <rick_h_> frankban: ever seen ERROR TLS handshake failed: x509: certificate signed by unknown authority ?
[16:59] <hatch> rick_h_ you've been h4z0r3d
[17:01] <frankban> rick_h_: never
[17:01] <rick_h_> bah, I can't fail the same way twice wheeeee
[17:02] <hatch> lol
[17:04] <frankban> rick_h_: was that handled by quickstart?
[17:05] <rick_h_> frankban: no, I had this before but got around it somehow
[17:05] <rick_h_> now I can't seem to get around it
[17:05] <rick_h_> asking in #juju about it.
[17:06] <frankban> rick_h_: maybe removing the jenv file?
[17:08] <rick_h_> juju-quickstart: error: machine 1 is in an error state: error: container "rharding-test2-machine-1" is already created
[17:08] <rick_h_> frankban: ^
[17:08] <rick_h_> looks good to me
[17:08] <frankban> rick_h_: great
[17:09] <frankban> thanks
[17:18] <jcastro> rick_h_, now that I have seen
[17:18] <jcastro> juju leaving containers around
[17:18] <rick_h_> jcastro: yep
[17:18] <rick_h_> frankban: is making quickstart watch for stuff like that so it doesn't hang for the user
[17:18] <rick_h_> but errors out properly
[17:18] <hatch> jcastro hey I have had a request for bundle level configuration options.... say you have a bundle and you want to deploy it in devel/debug/prod modes 
[17:19] <hatch> anyone brought anything like that up yet?
[17:19] <hatch> right now you would need 3 bundles
[17:19] <rick_h_> bundle inheritance?
[17:19] <rick_h_> bac and benji were getting that working in proof/charmworld
[17:19] <hatch> imho it sounds like it could be doen with 'stacks' 
[17:20] <hatch> how were they planning on doing it now?
[17:20] <hatch> ping a configuration server?
[17:20] <bac> hatch: it works now on staging
[17:20] <rick_h_> no, but I mean if you create a bundles.yaml with 3 bundles
[17:20] <rick_h_> and defined a base set of charms/config
[17:20] <bac> hatch: no, you have three different stanzas, each inheriting from a base with mods
[17:20] <rick_h_> and override it using inheritance
[17:21] <rick_h_> so you'd have name-debug, name-devel, name-prod bundles
[17:21] <rick_h_> and pick the one you want to deploy
[17:21] <hatch> hmm I didn't know that was possible
[17:21] <hatch> to have a 'base' bundle
[17:21] <rick_h_> the bundles.yaml file can contain several bundles in there. Just have to name them differently
[17:21] <bac> any bundle can inherit from any other in a deployer config file  (basket)
[17:22] <hatch> ohh cool, what's the syntax for that?
[17:22] <bac> rick_h_: is that why it is bundles.yaml and bundle.yaml?
[17:22] <bac> inherits: other
[17:22] <rick_h_> bac: I *guess*?
[17:22] <rick_h_> oh, no idea. I thought it was always bundles.yaml
[17:22] <rick_h_> when it is bundle.yaml?
[17:22] <bac> no, never
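The `inherits:` syntax bac mentions looks roughly like this in a deployer basket — service names and options here are invented for illustration; the bundle bac links at [17:26] is a real example:

```yaml
# bundles.yaml -- one basket holding a base bundle plus variants that
# inherit from it, as in the name-debug/name-devel/name-prod scheme above.
myapp-base:
  services:
    myapp:
      charm: cs:precise/myapp   # invented charm name
      num_units: 1
myapp-debug:
  inherits: myapp-base
  services:
    myapp:
      options:
        debug: true
myapp-prod:
  inherits: myapp-base
  services:
    myapp:
      num_units: 3
```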
[17:22] <jcastro> hatch, wouldn't you deploy the same bundle to 3 environments? devel/debug/prod?
[17:22] <bac> i'm just saying that's why that name was picked
[17:22] <rick_h_> jcastro: but in debug there'd only be one unit, and different config
[17:23] <jcastro> oh I see
[17:23] <rick_h_> jcastro: but in devel it's scale out, maybe use a cache layer not in debug
[17:23] <jcastro> it would be neat to arbritrarily pass along config and units to parts of a bundle
[17:23] <jcastro> "deploy this bundle but only one of each"
[17:23] <rick_h_> right, there's a pre-deployment config story that's on the radar currently
[17:23] <rick_h_> but it's done on each bundle deploy and not part of the file itself
[17:24] <rick_h_> so not quite the same use case, but should be handy
[17:24] <jcastro> but at the same time, forking/cloning a bundle is cheap
[17:24] <jcastro> and they don't contain _too_ much logic
[17:24] <rick_h_> right, but you'd want them in the same file
[17:24] <rick_h_> fix a config bug and find all the forks fml
[17:24] <jcastro> so like myapp-prod, myapp-dev and -debug isn't too bad
[17:24] <rick_h_> right
[17:24] <jcastro> oh right
[17:24] <jcastro> yeah that does sound nice
[17:25] <hatch> bac can you link me to the bundles.yaml file which has this inherit feature?
[17:25] <bac> sure
[17:26] <bac> hatch: https://code.launchpad.net/~bac/charms/bundles/complicated/bundle
[17:26] <hatch> bac oh cool thanks I'll pass this on
[17:27] <jcastro> man, that is brutal
[17:27] <hatch> lol
[17:27] <bac> rick_h_: not to be pushy, but are you going to be able to get to my review soonish?
[17:27] <bac> jcastro: it is a cleaned up version from kapil
[17:27] <rick_h_> bac: looking at it now
[17:27] <bac> the "if you can ingest this, then ingest works" bundle
[17:27] <jcastro> I did not know about overrides
[17:27] <rick_h_> bac: yep, in progress
[17:27] <bac> ty
[17:35] <rick_h_> bac: feedback in, qa'ing now
[17:35] <bac> thanks
[17:38] <rick_h_> bac: qa-ok
[17:38] <bac> rick_h_: cool.  looking at your review now.  qa started
[17:39] <bac> jujugui: in case i forget tomorrow, i will not be able to make the noon meeting.  apologies.
[17:39] <rick_h_> bac: ack, thanks for the heads up
[17:39] <hatch> we have a noon meeting? ;)
[17:39] <bac> sure we do
[17:39]  * rick_h_ didn't realize noon meeting and looks at the calendar
[17:39] <bac> for exactly one of us
[17:39] <rick_h_> lol
[17:39] <rick_h_> you're ahead of eastern?
[17:39] <bac> would you people please get on AST
[17:40] <rick_h_> for some reason I thought you were in eastern
[17:40] <rick_h_> hah
[17:40] <bac> rick_h_: no daylight savings
[17:40] <rick_h_> bac: ah
[17:40] <bac> so US/East half the time
[17:40] <rick_h_> that's right, we talked about that
[17:40] <rick_h_> you're just trying to be complicated
[17:40] <bac> gah, i wish we'd all ditch DST
[17:40] <rick_h_> +1
[17:41] <bac> and the goofballs here are now talking about starting up with DST!  geez, one of the best things they do they want to mess up
[17:48] <hatch> oh bisect...you rule
[17:48] <rick_h_> hatch: hah! awesome!
[17:48] <hatch> unfortunately it was me that caused the bug
[17:48] <rick_h_> yea, there are days when I go "All this moving to git stuff is paying off."
[17:48] <hatch> so I'm conflicted
[17:48] <rick_h_> double hah!
[17:48] <hatch> lol
[17:49] <rick_h_> try to tell me you need to break my feature will you
[17:49] <rick_h_> I'll shove that crap right back at you :P
[17:49] <rick_h_> j/k and all that, but glad to see we hopefully don't need to break things
[17:49] <hatch> oh sweet, I didn't CAUSE the problem, I exposed it
[17:49] <hatch> lol
[17:49] <rick_h_> hah, off the hook
[17:49] <rick_h_> kinhda
[17:49] <rick_h_> kinda
[17:49] <hatch> lol yeah kinds
[17:50] <rick_h_> "I didn't break it, I just proved it was broken"
[17:50] <hatch> haha, in math that would be the time to write a paper
[17:50] <rick_h_> in code it's time to write a pull request, same diff
[17:50] <rick_h_> :)
[17:53] <hatch> haha
[17:55] <bac> rick_h_: our dependencies branch will still be on launchpad, right?
[17:56] <hatch> rick_h_ ok another fix....enable the simulator so the next delta comes in :P
[17:56] <hatch> no? no? lol
[17:57] <bac> rick_h_: i think deploy.py setup_repository might need some fixing
[17:58] <rick_h_> bac: yea bzr is still installed
[17:58] <rick_h_> bac: looking, never seen/used that
[17:59] <rick_h_> bac: I updated the functional tests and they pass 20-functional
[17:59] <rick_h_> looking into deploy.py
[17:59] <rick_h_> but all tests pass currently
[17:59] <rick_h_> including the ec2 live functional ones
[18:02] <rick_h_> bac: ok, I've updated the rsync in there to ignore .git as well, which is the only thing I can see to fix there. 
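Conceptually, rick's rsync change is "copy the source tree but skip VCS metadata". A Python equivalent of the same idea (illustrative only — the charm really shells out to rsync with an exclude pattern) would be:

```python
import shutil

# Illustrative sketch: mirror a source tree while skipping VCS
# directories, the same effect as rsync --exclude=.git --exclude=.bzr.
def copy_without_vcs(src, dst):
    shutil.copytree(src, dst, ignore=shutil.ignore_patterns(".git", ".bzr"))
```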
[18:02] <bac> rick_h_: my 'make deploy' has been stuck at DEBUG:root:waiting for the unit to be ready
[18:02] <bac> for a long time.  i haven't run this lately.  any idea how long it should take?
[18:02] <rick_h_> pushing a new -cr
[18:02] <rick_h_> bac: 10min?
[18:02] <rick_h_> bac: can you juju status and see if something broken?
[18:02] <bac> eek
[18:03] <bac> gui is pending but all else looks fine
[18:03] <rick_h_> bac: ok cool. You can log in and check the unit log if you think there's an issue but it should get to an error if it fails
[18:04] <rick_h_> so sounds like it just needs more time
[18:04] <bac> it's been 30 minutes....
[18:05] <rick_h_> oh, then check out the unit log please
[18:05] <hatch> ugh test tracebacks that are only on chai.js
[18:05] <rick_h_> juju ssh juju-gui/0
[18:05] <rick_h_> and then tail -f -n 100 /var/log/juju/unit<tab complete>
[18:05] <rick_h_> bac: ^
[18:05] <bac> ok
[18:06] <rick_h_> hatch: web components! and the font thing seems cool http://blog.chromium.org/2014/01/chrome-33-beta-custom-elements-web.html
[18:10] <hatch> will look in a bit
[18:11] <bac> rick_h_: can't do that yet.  no public address assigned at this point
[18:11] <rick_h_> bac: you can't ssh to the machine? are you using ec2 or something else?
[18:11] <bac> local
[18:11] <bac> you can't ssh until it gets an address
[18:12] <rick_h_> bac: oh it doesn't work on local I didn't think because it requires sudo
[18:12] <rick_h_> or can you? /me didn't try I guess
[18:12] <bac> rick_h_: /home/bac/charms/precise/juju-gui> sudo juju ssh juju-gui/0
[18:12] <bac> ERROR unit "juju-gui/0" has no public address
[18:12] <rick_h_> right, I think quickstart works on local but make deploy doesn't
[18:13] <bac> rick_h_: i'll restart with ec2
[18:13] <rick_h_> bac: thanks, sorry for not specifying. It should work on any of the public clouds hp, ec2, azure
[18:13] <bac> rick_h_: i'm just frugal.  and i thought it'd be faster
[18:13] <frankban> rick_h_: are you sure? IIRC make deploy should work correctly on LXC
[18:14] <frankban> rick_h_: make test/ftest does not
[18:14] <rick_h_> frankban: I thought I hit an issue and you said it wouldn't work because it required sudo?
[18:14] <rick_h_> oh, maybe that was the functional tests I'm thinking of 
[18:14] <rick_h_> bah, ok. /me goes to try local lxc then with make deploy
[18:14] <frankban> rick_h_: yes, "make deploy" should not require bootstrapping the environment, and that's the only operation requiring sudo
[18:16] <rick_h_> frankban: k, testing local out now
[18:16] <frankban> bac, rick_h_ : when using lxc, you can find info in ~/.juju/local/log (or similar); ssh is often not required
[18:17] <rick_h_> frankban: ah, I've got trusty issues with local lxc and this 
[18:17] <rick_h_>     agent-state-info: '(error: symlink /var/lib/lxc/rharding-local-machine-1/config
[18:17] <rick_h_>       /etc/lxc/auto/rharding-local-machine-1.conf: no such file or directory)'
[18:17] <frankban> rick_h_: trying make deploy on trunk
[18:18] <rick_h_> agent-state-info: '(error: container "rharding-local-machine-1" is already created)'
[18:18] <rick_h_> bah, see, issues. /me goes to tear that down manually
[18:20] <rick_h_> frankban: same thing     agent-state-info: '(error: symlink /var/lib/lxc/rharding-local-machine-1/config /etc/lxc/auto/rharding-local-machine-1.conf: no such file or directory)'
[18:20] <rick_h_> on trunk
[18:22] <frankban> rick_h_: you can try lxc-destroying your containers, and then manually removing juju related stuff in /var/lib/lxc/
[18:22] <frankban> rick_h_: and in /etc/lxc/auto/
[18:22] <rick_h_> frankban: yea, its clean. It's that to write to /var/lib/lxc it needs root perms
[18:23] <frankban> rick_h_: I think that's ok
[18:23] <rick_h_> I need to try to set the alt path. It's not writing to .juju/local :/
[18:23] <rick_h_> because that machine it starts is ending up in /var/lib/lxc I think
[18:25] <frankban> rick_h_, bac: make deploy just worked here with trunk + lxc, and now I need to go, have a nice evening
[18:25] <rick_h_> frankban: cool, good to know it's just me/trusty
[18:25] <bac> thanks frankban
[18:25] <bac> rick_h_:  and me
[18:25] <rick_h_> frankban: will try on my laptop I think to see if my branch is causing bac grief
[18:26] <rick_h_> bac: worked on trunk? 
[18:26] <frankban> cool
[18:26] <rick_h_> bac: not sure what the 'and me' is to?
[18:26] <bac> rick_h_: it isn't working for me
[18:26] <rick_h_> right, ok
[18:26] <rick_h_> bac: so make deploy on ec2 is or is not working either?
[18:27] <bac> rick_h_: yes, it has come up on ec2
[18:27] <rick_h_> ok cool, I'll test lxc on my laptop with trunk and my branch and see if I can dupe any local issues or not
[18:28] <hatch> rick_h_ https://github.com/hatched/juju-gui/commit/527027fbbda1fb712e62e8b8e795cbee84587700 this fixes the issue, I can't find any issues in qa but looking for input on any potential problems I may have overlooked 
[18:29] <rick_h_> hatch: looking at that, how are we sure that doesn't introduce another level of dispatch on 'working' cases?
[18:29] <rick_h_> hatch: I'd expect it to set/check something before and after navigate to see if it fired or not?
[18:30] <rick_h_> and if not, then check hash and force a dispatch
[18:31] <hatch> that path is only hit when the user logs in and if there is a hash in the url so it shouldn't get hit during any other case
[18:33] <rick_h_> onLogin is only triggered once per visit to the site pinky swear?
[18:34] <hatch> it's triggered when the env fires a login event
[18:34] <hatch> this.env.after('login', this.onLogin, this);
[18:34] <hatch> so it should only happen once :)
[18:34] <rick_h_> ummm, ok then. if you say so
[18:34]  * rick_h_ doesn't trust anything happening only when it's supposed to
[18:35] <hatch> haha, I'm going to try it on lxc right away
[18:35] <hatch> did you push your charm update up?
[18:35] <hatch> the pull-from-git one
[18:36] <rick_h_> yea, it's pushed. lp:~rharding/charms/precise/juju-gui/git-ify
[18:36] <rick_h_> bah, my laptop has juju .07
[18:36] <hatch> wow that's an old one
[18:37] <rick_h_> yep
[18:37] <rick_h_> go raring go
[18:43] <hatch> :/ my juju env is corrupted or something
[18:44] <rick_h_> hah! at least I'm not the only one having issues with it 
[18:44] <hatch> ERROR destroying environment: remove /etc/lxc/auto/hatch-local-machine-1.conf: no such file or directory
[18:44] <hatch> but it thinks it's running
[18:44] <hatch> any ideas on how to get around that?
[18:44] <rick_h_> nope, kill everything. Destroy the environment. Re-bootstrap?
[18:45] <hatch> it is killed
[18:45] <rick_h_> make sure all the old machines are gone and removed from lxc
[18:45] <hatch> it just thinks it's up
[18:45] <rick_h_> sudo lxc-ls
[18:45] <hatch> nothing
[18:45] <hatch> no machines
[18:45] <hatch> juju status shows them though
[18:46] <rick_h_> wipe the .juju/environments/xxxx?
[18:46] <rick_h_> I'm not sure there
[18:47] <hatch> I'll ask in #juju
[18:51] <bac> rick_h_: where do i find the hash for 'network-prototype', your second QA step?
[18:52] <rick_h_> bac: that is the name of a branch
[18:52] <bac> okey doke
[18:52] <bac> how do i verify version.js is correct?
[18:52] <rick_h_> bac: oh right, so to check the version in github go to that branch in the drop dow
[18:52] <rick_h_> https://github.com/juju/juju-gui/tree/network-prototype
[18:52] <rick_h_> and click on commits
[18:52] <rick_h_> to see the list and their hashes
[18:52] <bac> rick_h_: thanks, i didn't see the drop down
[18:53] <rick_h_> notice in the upper left next to the "juju-gui/+" is a drop down for branches available
[18:53] <rick_h_> cool
[18:54] <rick_h_> bac: so I got make deploy to work with local lxc. It took a bit, and I tracked that it was running by cat-ing .juju/local/logs/unit.....
[18:54] <bac> rick_h_: cool
[18:57] <hatch> just plz dont merge network-prototype into develop....it will break the world
[18:57] <hatch> ;)
[18:57] <rick_h_> hatch: we're not, it's just the qa case of can you set that branch in the charm 
[18:58] <hatch> cool
[18:58] <hatch> some guy kept squatting on my username until one of the others in London told me about nickserv's `enforce` option
[18:58] <hatch> now I don't have to ghost anymore
[18:58] <rick_h_> woot
[18:59] <rick_h_> I got all excited we had random github forkers 
[18:59] <rick_h_> and then saw they worked for cisco in their user bios 
[18:59] <rick_h_> doh!
[18:59] <hatch> haha, well they aren't RANDOM but they hopefully will be contributing
[18:59] <bac> crud, no vanguard in #webops
[19:00] <hatch> ugh why can't I `juju deploy  lp:~rharding/charms/precise/juju-gui/git-ify` LIKE FOR SERIOUS!!!!
[19:00] <rick_h_> because it's not in the story, non-trunk branches aren't deployed
[19:00] <rick_h_> or ingested that is
[19:00] <hatch> I don't care what the excuse is...it should work :P
[19:00] <hatch> I'm providing the path where everything it needs is contained
[19:01] <bac> rick_h_: sorry for the slow review/qa.  done now
[19:01] <rick_h_> bac: not a problem, I know it's been a slow process getting things working. Yay charm dev
[19:02] <hatch> oh well, guess I'll just have to pull your branch down and do it locally
[19:02] <rick_h_> hatch: yep
[19:02] <rick_h_> hatch: make sure to make sysdeps
[19:02] <hatch> well I'm just gonna deploy it as a local charm
[19:02] <rick_h_> hmm, ok
[19:03] <bac> jujugui: anyone have a USB 3 hub that they like?  i've been through two and can't get one that reliably works
[19:03] <hatch> you don't like that approach?
[19:03] <rick_h_> hatch: no, just not tried it
[19:04] <rick_h_> I think make deploy is supposed to be faster because it skips some step, but can't recall which
[19:04] <hatch> the upload to juju
[19:04] <hatch> I think
[19:04] <rick_h_> bac: I've got http://www.amazon.com/gp/product/B009Z9M3DY/ref=wms_ohs_product?ie=UTF8&psc=1 and it's been working for me
[19:05] <rick_h_> but there was a post about some issues with anker usb3 stuff so not sure if I can recommend
[19:05] <bac> rick_h_: i have the cousin http://www.amazon.com/gp/product/B009Z9M3DY/ref=wms_ohs_product?ie=UTF8&psc=1
[19:05] <bac> western digital externally powered hard drive frequently won't mount
[19:06] <bac> anker did contact me and sent a replacement after i put a bad review on AMZN.  second one is only marginally better
[19:06] <rick_h_> yea, I'm trying to find the blog post
[19:06] <rick_h_> there's something about certain chipset versions or something
[19:06] <rick_h_> but I only have keyboard, power cable, and such on it. no drives
[19:06] <bac> did a firmware update on the WD and that seemed to help but not consistently
[19:07] <bac> ah, so you're not really pushing USB 3
[19:07] <rick_h_> yea, once in a while plug in my usb3 thumbdrive
[19:07] <rick_h_> but that's rare
[19:07] <bac> .
[19:08] <bac> ohh, i didn't think to try my thumbdrive
[19:08] <hatch> I want this....but jeesh $$ http://www.belkin.com/us/p/P-F4U055/
[19:08] <bac> yeah
[19:08] <rick_h_> thanks for the review bac 
[19:09]  * bac waits for 4K apple cinema display with thunderbolt
[19:09] <hatch> hah
[19:09] <bac> because that'll be cheap
[19:09] <hatch> it'll be like $5000 lol
[19:09] <hatch> holy smokes this bzr branch is taking FOREVER!!
[19:11] <hatch> I want the belkin thing so that I only have to plug/unplug a single cable when 'docking' this thing
[19:11] <hatch> but at $300 it's gonna have to wait :)
[19:11] <bac> yeah, that'd be nice.
[19:11] <bac> hatch: if i had an external monitor i'd do it
[19:11] <rick_h_> why I like thinkpad and docks
[19:12] <rick_h_> drop in dock, go
[19:12] <hatch> I have two but they are both display port which I don't think allows for chaining
[19:12] <hatch> so the $300 part is probably only one piece in the puzzle
[19:13] <bac> so the rule is now, no deploys after noon us/east on thursday...effectively
[19:13] <hatch> sounds like a plan
[19:13] <rick_h_> :/
[19:13] <bac> tuesday is a good day to deploy.  i'll shoot for tuesday.
[19:13] <hatch> 163831kB and it's still going
[19:13] <hatch> how big is your branch rick_h_  lol
[19:13] <rick_h_> hatch: get better internets :P
[19:14] <hatch> can't I branch a single revno somehow?
[19:14] <bac> hatch: i think you may have your repo malconfigured
[19:14] <bac> using shared repos?
[19:14] <hatch> I'm just pulling down a single branch, I don't really want to set up shared repos on this box
[19:15] <bac> hatch: then you don't really mind waiting
[19:15] <bac> i mean, it can grab the common info from a local shared repo
[19:15] <bac> or you can download it again each and every time
[19:15] <hatch> kind of a shortcoming of bzr, hey, that I can't pull down a single revno
[19:15] <rick_h_> sure you can
[19:16] <hatch> really? it sure seems to be pulling down the entire repo still
[19:17] <rick_h_> yea, there's flags to do a shallow clone
[19:17] <jcastro> https://code.launchpad.net/~james-page/charms/bundles/openstack-on-openstack/bundle
[19:17] <jcastro> any idea why this isn't showing up on jujucharms.com?
[19:17] <rick_h_> jcastro: the fixes for supporting inheritance and self-referring relations are in progress
[19:18] <rick_h_> we can't deploy the one fix right now and the other will be fixed monday
[19:18] <jcastro> oh ok, so it'll just show up at one point
[19:18] <bac> jcastro: we've made a request to deploy
[19:18] <jcastro> gotcha
[19:18] <rick_h_> jcastro: rgr
[19:18] <rick_h_> jcastro: that found two bugs in our proofing stuff
[19:18] <rick_h_> jcastro: fixes in progress
[19:19] <rick_h_> hatch: bzr co --lightweight lp:~juju-gui-charmers/juju-gui/charm-download-cache $DOWNLOADCACHE
[19:19] <rick_h_> is an example
[19:20] <jcastro> rick_h_, ok so I'll just say it'll show up over the next few days or so
[19:20] <rick_h_> jcastro: rgr
[19:20] <jcastro> manually doing the bundle by hand seems to have worked
[19:20] <jcastro> in the mock environment anyway. :p
[19:21] <rick_h_> yea, kapil had some as well that worked but we had issues with them ingesting due to proof thinking they were invalid
[19:45] <bac> jujugui, jcastro: manage.jujucharms.com did get updated
[19:45] <bac> so inherited charms should work now or when they get ingested
[19:45] <bac> self-referential ones will have to wait until next week, jcastro
[19:47] <rick_h_> jcastro: bac so I downloaded and ran proof on it
[19:47] <rick_h_> E: openstack: The requested relation nova-ceilometer to nova-ceilometer is incompatible between services.
[19:47] <rick_h_> is the error
[19:47] <rick_h_> that's the one benji is working on and will be updated next week
[19:48] <bac> yes, the so-called 'self-referential'
[19:50] <bac> jujugui: i'm going to duck out.  dog to walk.  festivities to shoot (camera) while avoiding being shot (glock).
[19:50] <rick_h_> bac: good luck with the shooting
[19:50] <rick_h_> on both ends of it
[19:51] <bac> throw in some civil rights violations, unconstitutional search and seizures, pig on a stick and it's a party.
[19:51] <jcastro> rick_h_, ok so here's a weird one.
[19:51] <rick_h_> ruh roh
[19:51]  * rick_h_ ducks
[19:51] <jcastro> rick_h_, do a search for "elastic search juju"
[19:51] <jcastro> manage.jujucharms.com/~charming-devs/precise/elasticsearch‎
[19:52] <jcastro> whatever ~charming-devs is, its URL comes up before the canonical one
[19:54] <rick_h_> hmmm, well...not sure on that one
[19:54] <rick_h_> I mean I can make crap up
[19:55] <jcastro> yeah I had never even heard of ~charming-devs
[19:55] <rick_h_> "google knows that juju and the word 'charm' go together, and in this case the url has more charm in it due to the ~charming-devs in it"
[19:55] <rick_h_> jcastro: it's our ES charm that we package it up for charmworld/IS. Notice sinzui is the only one with commits on it
[19:55] <rick_h_> it's purely to get into IS for charmworld
[19:56] <rick_h_> jcastro: hmm, http://manage.jujucharms.com/precise/elasticsearch/hooks/install
[19:56] <rick_h_> might be causing the page to lose points on google-fu
[19:57] <rick_h_> jcastro: maybe file a charmworld bug on the 404's 
[19:57] <rick_h_> the other charm doesn't have the 404s and seems better quality
[20:45] <hatch> rick_h_ hey you around?
[20:45] <rick_h_> hatch: yea
[20:46] <hatch> so I haven't been able to deploy my branch using your charm
[20:46] <rick_h_> k, got a sec to hangout and I can walk you through getting debug info?
[20:46] <hatch> in a bit, dogs are playing lol
[20:46] <rick_h_> rgr
[20:47] <hatch> oh I think I screwed it up
[20:51] <hatch> rick_h_ is there a way I can trigger a config changed hook to run?
[20:51] <rick_h_> hatch: change the config?
[20:51] <rick_h_> hatch: set it back to 'develop'
[20:51] <rick_h_> hatch: so you have to resolve it if it's in error first
[20:51] <rick_h_> then set it to develop
[20:52] <rick_h_> then set it to "https://.....your...repo your_branch"
[20:52] <rick_h_> watch the unit log in /var/log/juju/unit..... for what's going on
[20:52] <hatch> 2014-01-16 20:50:30 INFO juju.worker.uniter context.go:323 HOOK ValueError: u'git@github.com:hatched/juju-gui.git render-app-hash': release not found
[20:53] <hatch> no matter what I try that's the error I get
[20:53] <hatch> 'release not found'
[20:53] <rick_h_> don't use the git address
[20:53] <rick_h_> only https
[20:53] <rick_h_> git needs ssh keys
[20:53] <rick_h_> never use those for tools/etc outside of your own work
[20:54] <rick_h_> https:// note that in all the examples/etc
[20:55] <hatch> oh there are examples?
[20:55] <hatch> lol
[20:56] <rick_h_> yea in the docs 
[20:57] <rick_h_> well actually in the config.yaml there's notes on possible values
[20:59] <hatch> blarg no luck
[20:59] <rick_h_> log?
[21:00] <hatch> looking
[21:00] <hatch> apparently it barfed on 'develop' as well
[21:01] <rick_h_> on 'just' develop?
[21:01] <rick_h_> the string?
[21:01] <hatch> yup
[21:01] <rick_h_> then I need to see the logs/etc, because it's passed QA and, while I've got a stupid functional test issue keeping me from landing, it works
[21:01] <hatch> it appears to be working on the https though....
[21:02] <hatch> 2014-01-16 21:00:41 INFO juju.worker.uniter context.go:323 HOOK ValueError: u'https://github.com/hatched/juju-gui.git render-app-hash': release not found
[21:02] <hatch> 2014-01-16 20:58:30 INFO juju.worker.uniter context.go:323 HOOK ValueError: u'develop': release not found
[21:03] <rick_h_> wtf it shouldn't be trying to find a release.
[21:03] <rick_h_> what's your juju set command?
[21:04] <hatch> is there a way I can check where the charm is from?
[21:04] <rick_h_> not following
[21:04] <hatch> charm: cs:precise/juju-gui-81 is what it says
[21:05] <hatch> is that the version of your charm?
[21:05] <hatch> 81
[21:05] <rick_h_> nope, not sure where 81 comes from
[21:05] <rick_h_> version is 102
[21:05] <rick_h_> bzr rev is 150s
[21:05] <hatch> hmm wth
[21:06] <hatch> yeah revno is 159
[21:06] <rick_h_> so sounds like the charm source is out of date
[21:08] <hatch> well that's the revno from the folder `juju deploy --repository=/home/hatch/precise juju-gui`
[21:08] <rick_h_> I'm not sure
[21:08] <rick_h_> juju ssh juju-gui/0
[21:08] <rick_h_> then sudo updatedb
[21:08] <hatch> in the charm what should I look for?
[21:08] <rick_h_> and locate utils.py
[21:09] <rick_h_> in that should see some git stuff
[21:09] <hatch> oo lots of those
[21:10] <rick_h_> git references or utils.py?
[21:10] <rick_h_> locate utils.py | grep juju-gui
[21:10] <hatch> none
[21:11] <rick_h_> locate utils.py | grep juju 
[21:11] <hatch> none
[21:11] <rick_h_> wtf
[21:11] <rick_h_> locate HACKING.md
[21:11] <hatch> none
[21:11] <hatch> maybe the db is broken
[21:11] <hatch> any idea the real location for it?
[21:11] <rick_h_> oh oh
[21:11] <rick_h_> sudo locate HACKING.md
[21:11] <rick_h_> forgot the charm is owned by the non-ubuntu user
[21:11] <hatch> there we go
[21:12] <hatch> ohh...thats odd
[21:12] <hatch> isnt it?
[21:12] <rick_h_> so go there and look for the utils.py
[21:12] <rick_h_> and look for git in that file
[21:13] <hatch> ok there is no git in this file
[21:13] <hatch> wtf
[21:14] <rick_h_> then you've got some other source
[21:14] <rick_h_> and that explains everything
[21:14] <hatch> which is impossible because of how it was deployed
[21:14] <hatch> maybe I came across a bug?
[21:15] <hatch> ohhh right
[21:16] <hatch> ok so this CLI needs some help lol because it totally should have thrown an error instead of just working silently
[21:16] <rick_h_> 'this cli'
[21:16] <rick_h_> ?
[21:16] <hatch> juju
[21:16] <rick_h_> the unit should be in an error state?
[21:16] <hatch> no it never should have deployed
[21:16] <rick_h_> because the hook should have failed
[21:16] <rick_h_> why not? it picked up something that worked. I'm not sure what you did or how you did it
[21:16] <hatch> it deployed from charmstore even though I specified a repository
[21:17] <rick_h_> did you maybe get trunk vs my branch?
[21:17] <hatch> yup
[21:17] <rick_h_> is your local repository trunk vs my branch?
[21:17] <hatch> my local repo is your branch, with git in the utils etc etc
[21:17] <rick_h_> ok
[21:17] <hatch> but because I didn't type local:precise/juju-gui it disregarded the repository flag
[21:17] <rick_h_> then like I said, I'd sure just run 'make deploy' if I were you :)
[21:18] <rick_h_> if you wanted to save time, edit the config.yaml and make your url + branch the default value
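rick_h_'s time-saver above amounts to editing the charm's config.yaml so your fork and branch become the default, skipping the `juju set` step on every redeploy. A hypothetical sketch of the relevant stanza follows; the option name `juju-gui-source` and the exact key layout are assumptions, not confirmed in this log, with the fork/branch value taken from the session above:

```yaml
# Hypothetical config.yaml stanza (option name and layout assumed).
options:
  juju-gui-source:
    description: >
      Where to install the GUI from: "develop", a release number,
      or "<repo-url> <branch>".
    type: string
    default: https://github.com/hatched/juju-gui.git render-app-hash
```

With a default like this in the checkout, `make deploy` would pick up the branch without any further configuration.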
[21:18] <hatch> I don't think I know the process to pull down a charm into an instance
[21:18] <rick_h_> kill it
[21:19] <hatch> oh well I can start over haha
[21:19] <hatch> I thought you meant to pull down the charm into the instance
[21:19] <rick_h_> nope
[21:19] <rick_h_> I mean change the config.yaml in your checkout, run make deploy, enjoy
[21:20] <hatch> that could be on a tshirt
[21:20] <hatch> make, deploy, enjoy
[21:21] <hatch> I wish there was a way to list the config options and their values without the descriptions
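hatch's wish, config option names and values without the descriptions, can be scripted against the settings mapping that `juju get <service>` returns. A minimal sketch in Python, assuming the juju 1.x output shape where each option maps to a dict with "description", "type", and "value" keys (the sample data below is made up):

```python
# Condense a juju-get-style settings mapping to "name: value" lines,
# dropping the descriptions. Sample data is hypothetical.
sample_settings = {
    "juju-gui-source": {
        "description": "Where to install the GUI from.",
        "type": "string",
        "value": "develop",
    },
    "serve-tests": {
        "description": "Whether to serve the test suite.",
        "type": "boolean",
        "value": False,
    },
}


def summarize(settings):
    """Return sorted 'name: value' lines, one per option."""
    return ["%s: %s" % (name, opt.get("value"))
            for name, opt in sorted(settings.items())]


for line in summarize(sample_settings):
    print(line)
```

Feeding this the parsed YAML from a real `juju get juju-gui` call would give the same condensed view.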
[21:23] <hatch> *sigh* now juju is throwing errors again
[21:23] <rick_h_> wheee
[21:23] <hatch> this version is sure buggy
[21:26] <hatch> I of course am probably doing things which are not 'normal' lol
[21:27] <hatch> there we go
[21:27] <hatch> charm version 102 being deployed
[21:28] <rick_h_> hatch: https://plus.google.com/104537541227697934010/posts/Qj8R5SWAsfE
[21:29] <hatch> it's picking up steam 
[21:29] <hatch> I still think the template should be loaded in from another file
[21:31] <hatch> I THINK I saw a video where one of the devs was going to say that they are putting together some functionality like that
[21:31] <hatch> but maybe I'm just making that up
[21:36] <rick_h_> hatch: ok, just pushed the charm changes to ~juju-gui trunk version
[21:36] <rick_h_> hatch: so soon it'll be in the store hopefully
[21:37] <rick_h_> just not the reviewed/released one, but the ~juju-gui one
[21:37] <hatch> ahh well it's ok now - I think I also figured out my issue with the corrupt juju instance
[21:37] <rick_h_> okie dokie
[21:37] <rick_h_> I'm going to head out. If you need a hand let me know and I'll try to check in later.
[21:37] <hatch> if you go `juju destroy-environment local` then `sudo juju destroy-environment local` then it throws the error
[21:38] <hatch> cool, have a goood one
[21:38] <rick_h_> you too
[22:02] <huwshimi> Morning
[22:13] <hatch> morning huwshimi, sorry I haven't had a bunch of time to look at your branch again
[22:13] <hatch> but the reason your event isn't being fired is because your callback is in the wrong context
[22:15] <hatch> I just added the comment
[22:15] <huwshimi> hatch: Ah great. No problems at all
[22:19] <huwshimi> hatch: But will that event get fired if there is no CSS animation in the test?
[22:19] <hatch> umm looking
[22:22] <hatch> so the issue is that we aren't loading the css hey
[22:22] <hatch> hmm
[22:23] <huwshimi> Yeah
[22:24] <huwshimi> hatch: Even adding  style="transition: left 0.1;" to the element doesn't fix it.
[22:25] <huwshimi> hatch: Unless it needs to be vendor prefixed
[22:25] <hatch> does the event fire when it's not in a test?
[22:27] <huwshimi> hatch: Yeah, it has been working fine (even without the 'this' from your comment)
[22:28] <hatch> ok right, the 'this' comment was only to fire that event
[22:29] <hatch> ok so what's happening is that the event is not being fired in phantomjs
[22:29] <hatch> so what you want to do is know when the new tab is visible right?
[22:31] <huwshimi> Yep
[22:31] <hatch> huwshimi looks like phantom does not support transitionend
[22:32] <huwshimi> When it has animated into place
[22:32] <huwshimi> ah right
[22:33] <hatch> SO....
[22:33] <huwshimi> hatch: And I believe you can't simulate arbitrary events with YUI right?
[22:33] <hatch> you could try firing an event on the node
[22:33] <hatch> myNode.fire('transitionend')
[22:34] <hatch> but I have no idea if that's going to work 
[22:37] <huwshimi> nope
[22:38] <huwshimi> I also have problems with the selectionChange event not firing, but that's probably my fault somehow
[22:41] <hatch> hmm that looks ok but I haven't pulled down the code to really take a look
[22:41] <hatch> been stuck trying to fix a bug all day
[22:42] <huwshimi> hatch: What are you working on?
[22:42] <hatch> well I just finished qa'ing and now I'm writing the last of the tests but it's the bug where it wouldn't dispatch if there is a hash in the url on prod
[22:43] <huwshimi> oh, fun
[22:44] <hatch> yeah it was a simple small fix but tracking it down and testing it was very time consuming
[22:54] <hatch> I dropped my phone on my laptop and scratched it
[22:54] <hatch> these things are sure fragile lol
[22:58] <huwshimi> heh
[23:13] <hatch> jujugui looking for a review/qa https://github.com/juju/juju-gui/pull/75 (requires qa in real env)
[23:20] <rick_h_> hatch: I'll look first thing in the morning 
[23:20] <hatch> coolio, qa'ing takes a while unfortunately
[23:21] <rick_h_> yea, I'm between things so should have time