[01:36] <hatch> good evening
[01:58] <rick_h_> party
[01:58] <rick_h_> how goes the mountain hatch ?
[01:58] <hatch> oh I never went to the mountain
[01:58] <hatch> I went to the lake instead, but I'm back now
[01:59] <hatch> plans for the mountain fell through :/
[01:59] <rick_h_> oh, thought I heard something about boarding
[01:59] <rick_h_> ah, gotcha
[01:59] <rick_h_> welcome home
[01:59] <hatch> thanks, I've been to the mountains 3 times this year already so I figure it's not so bad that the 4th was canceled :)
[02:00] <rick_h_> good stuff, took the fam out to the local little ski hill this week
[02:00] <rick_h_> think I'm going to get some skis here next month when they have end of year sales
[02:00] <rick_h_> try to get some lessons next year
[02:00] <rick_h_> or the way this winter has gone, look into cross country skiing
[02:01] <hatch> ahh awesome, yeah we have been thinking of getting into that too, there are a bunch of trails around here, and...well anything to help me get less fat
[02:01] <hatch> lol
[02:01] <rick_h_> hah
[02:03] <hatch> I see you're on the debug log email train
[02:03] <hatch> so I'll stay out of that one :)
[02:03] <rick_h_> yea
[03:01] <lazyPower> Do you guys know where i can find the recipes for the quickstart images? I'm looking specifically for the preseeds or cloud init scripts we're using to get juju installed on the base image.
[11:47] <marcoceppi> rick_h_: it definitely happened, it just took a while to happen (re: migration to openstack-charmers)
[11:50] <rick_h_> marcoceppi: oh yea? 
[11:51] <marcoceppi> yeah, queue is down to only 3k this morning too
[11:51] <rick_h_> lazyPower: quickstart images?
[11:51] <rick_h_> marcoceppi: hmm, I show 37k
[11:51] <rick_h_> baskets (bundles) is 3k
[11:51]  * marcoceppi rubs crap out of eyes
[11:51] <marcoceppi> oh yeah, there is an extra digit in there
[11:51] <rick_h_> I got logs from webops so we'll see today. but yea it does look like it goes down some
[11:52] <rick_h_> so glad it's processing for you somewhat, just on a major 37k delay
[12:05] <rick_h_> luca__: there's both a monday and friday catch up calls?
[12:06] <luca__> rick_h_: yes because stuff might happen over the weekend... :P
[12:06] <luca__> rick_h_: I've just put them in as place holders
[12:06] <rick_h_> luca__: ok
[12:06] <rick_h_> just checking :)
[12:07] <luca__> rick_h_: I set a call up today to discuss stuff
[12:07] <rick_h_> luca__: ok
[12:07] <luca__> rick_h_: I had a meeting with Mark this morning at 9am, he signed off Machine View :D
[12:07] <rick_h_> woot!
[12:08] <rick_h_> go go go before he changes his mind :)
[12:08] <luca__> haha
[12:08] <luca__> I've already told him I'll be passing it on this week for implementation
[12:08] <luca__> it's all locked and we aren't planning to show him any more designs on it
[12:10] <rick_h_> cool
[12:19] <rick_h_> bac: morning
[12:31] <bac> hi rick_h_
[12:32] <rick_h_> bac: charmworld's gone boom. I'm getting an email ready. I've got a morning full of calls until the standup in a bit so wanted to see if I could handoff. 
[12:33] <bac> yes, please
[12:33] <bac> manage, i assume
[12:33] <rick_h_> yea
[12:33] <rick_h_> I'll have the email out in a couple of min. logs attached
[12:33] <bac> boom in what regard?
[12:34] <bac> oh my, look at those queues!
[12:36] <rick_h_> heh, well it was over 30k this morning
[12:36] <rick_h_> email away
[12:36] <rick_h_> bac: I will say that marcoceppi's stuff did eventually ingest overnight. So the queue is getting processed, but I think all these issues finding charms and empty bzr branches is causing a slow down perhaps? 
[12:36] <rick_h_> or the revision fetching code to go haywire. Not sure off the top of my head
[12:37] <bac> rick_h_: i don't know what of marco's stuff you're referring
[12:37] <rick_h_> bac: sorry, this came up because marco was submitting charms that were not ingested after 30 or so minutes last night
[12:37] <rick_h_> they did eventually, but only after going through the queue all night
[12:37] <rick_h_> bac: so the queue is 'working' for some definition, but it's getting waaaaay backed up
[12:38] <bac> yes, our history fetching is subpar
[12:38] <rick_h_> the logs have some interesting stuff, but didn't jump out at me what happened. Might need to check if there's been anything going on the juju store side
[12:38] <bac> we know it is not good but haven't spent the time to fix it since it will be going away.
[12:39] <rick_h_> yea
[12:39] <rick_h_> have to limp her along for another 8mo or so though
[12:41] <bac> it seems the queuing job could look into the db to see if an older revision exists and simply not queue it if it does.  by definition, old revisions are immutable so why refetch them?
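bac's suggestion above can be sketched roughly as follows. This is a minimal illustration only: the function name and the db/queue shapes are invented, not charmworld's actual queueing code.

```python
# Illustrative sketch of the dedup idea: because old charm revisions are
# immutable, a revision already in the db never needs to be refetched.
# The db here is a plain dict keyed by (charm_id, revno); the queue a list.

def enqueue_if_new(db, queue, charm_id, revno):
    """Queue a charm revision for ingest only if it isn't already stored."""
    if (charm_id, revno) in db:
        return False  # already ingested; skip the refetch entirely
    queue.append((charm_id, revno))
    return True
```

With a check like this in the queuing job, only genuinely new revisions would ever reach the (slow) history-fetching path.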
[12:43] <bac> this looks bad: 2014-02-19 01:43:25 [24559] [CRITICAL] WORKER TIMEOUT (pid:2310)
[12:45] <bac> 2014-02-19 12:00:02,921 WARNI [charm.review][MainThread] lp_credentials path doesn't exist: "/home/webops_deploy/charmworld/charmbot_credentials.txt"
[12:45] <bac> deploy problem
[12:48] <rick_h_> bac: want to try to push for our new unit today then?
[12:48] <rick_h_> bac: jacekn was helping me on vanguard
[12:49] <bac> rick_h_: i want to investigate a little before we lose the current bad state
[12:49] <rick_h_> bac: k
[12:49] <bac> rick_h_: i'm also a bit unclear of the steps for the unit swap.
[12:50] <bac> if we bring up another charmworld instance, talking to the same mongo and es, won't they be clobbering one another?
[12:51] <rick_h_> bac: yes, but they should be clobbering with the same data for a short time
[12:51] <rick_h_> bac: maybe we can stop the queue in supervisor on the current running instance first
[12:51] <bac> actually we could just stop the ingest process on the existing unit first
[12:51] <rick_h_> so that it's just serving as pure read-only
[12:51] <rick_h_> right
[12:51] <bac> yep
[12:51] <rick_h_> and then bring up the new unit, flip the dns to the IP addresses on the new unit for m.j.c
[12:51] <rick_h_> and then take down the old unit after dns updates
[12:52] <rick_h_> in theory at least
[12:53] <bac> rick_h_: it could be that we have too much work to do in the time allotted.  the timer pops with items still in the queue.  when the process is restarted it just adds to the work and then we're doomed.
[12:53] <bac> upon ingest process restart we should use the --clear option to start from scratch.  though, if our period is too short we'll never finish.
[12:53] <rick_h_> bac: right, but I'd expect to see fewer errors/issues with that. The queues just back up but still go. 
[12:54] <rick_h_> bac: we can test it out and see. Can we add another worker to the queue processing side? 
[12:54] <bac> i don't see a lot of unexpected things in the logs except the CRITICAL timeout i pasted
[12:54] <rick_h_> all the charms not found and no files in the charm is normal? /me hadn't realized that had gotten that way
[12:55] <bac> rick_h_: oh, those errors may be tied to the other issue, the lp_credentials not found
[12:55] <bac> those could be private branches that cannot be read by lplib in anonymous mode
[12:56] <rick_h_> bac: ah ok
[13:15] <bac> rick_h_: jace found the credentials to be empty and was going to restore them.
[13:16] <bac> rick_h_: i propose we have him kill and restart the ingest process, which will clear the queues, and then monitor them to see what happens.  note that staging is processing just fine.
[13:18] <rick_h_> bac: rgr, sounds good thanks
[13:23] <bac> rick_h_: queues reset on m.j.c and processing at a reasonable rate.  will keep an eye on things
[13:23] <frankban> guihelp: I need one review + QA for https://codereview.appspot.com/65970043 (quickstart branch in preparation for new release). Is anyone available? thanks!
[13:23] <rick_h_> frankban: looking right now
[13:24] <frankban> thanks rick_h_ 
[13:37] <bac> rick_h_: turns out the last deploy installed an old crontab file for charmworld.  the queueing process is getting run two different ways, thus ingest can never keep up.
[13:37] <bac> rick_h_: this a direct result of RT 67228
[13:38] <rick_h_> bac: arg, yea thanks for the diagnosis. 
[13:38] <rick_h_> at least it's something we know about I guess. Time to press to get that fixed. 
[13:38] <gary_poster> fixing deploy of charmworld to make it sane was something we've had on the list
[13:38] <rick_h_> so combo of lp creds not there and double ingesting every cycle
[13:38] <rick_h_> yea, the RT has been in progress, just haven't gotten the fix that is noted as needing to happen...to happen
[13:39] <bac> rick_h_: yeah.  so they think creating a new unit of charmworld is the fix?
[13:39] <rick_h_> all the work on what's wrong and how to fix it has been done
[13:39] <rick_h_> bac: yes, the RT notes that after talking with juju-core that unit is fubar now 
[13:39] <rick_h_> bac: and the only fix is a fresh one
[13:39] <bac> rick_h_: so should i wait for deej and get him to make it happen?
[13:40] <rick_h_> the issue is something with how IS is using the git support in juju to track changes from the original charm to handle upgrades
[13:40] <bac> rick_h_: and will subsequent deploys need this unit replacement dance?
[13:40] <rick_h_> bac: I'd see if deej is going to be around, but if not we really need to find someone to make the fix. 
[13:40] <rick_h_> bac: I don't think so. I believe that once this damaged unit is replaced, we're back on normal track
[13:40] <rick_h_> but I'm assuming that IS is stopping whatever put it in this bad state to start with
[13:44] <bac> rick_h_: jace is no longer vanguard.  mjc is now cowboyed to be in a happy place.  perhaps we can work with the next vanguard or deej to get the proper fix from the RT deployed.  this issue is wasting lots of our time.
[13:44] <rick_h_> bac: +1
[13:44] <rick_h_> bac: let me know if you find out if deej is around and if I need to ping someone higher up to get the fix out
[13:45] <rick_h_> around today that is
[13:45] <bac> he is not yet here.  gary_poster is deej us/east?
[13:45] <gary_poster> bac yes
[13:45] <bac> us/fredericksburg
[13:45] <gary_poster> yup
[13:45]  * rick_h_ checks directory
[13:45] <bac> cool
[13:46] <rick_h_> bac: and not away in holiday list so give it a bit and let's see if we can track him down :)
[13:52] <rick_h_> frankban: LGTM + QA ok thanks!
[13:52] <frankban> rick_h_: great thank you
[13:59] <frankban> uhm, note to myself: remember to check the landing lane before running "lbox submit"...
[14:00] <lazyPower> rick_h_: apparently i was referring to the jujuredirect project
[14:00] <lazyPower> i found the scripts I was looking for though. Thanks for following up
[14:17] <rick_h_> lazyPower: cool
[14:27] <bac> rick_h_, gary_poster: deej is in the house and we're working on it now
[14:45] <bac> rick_h_: are you aware of what orange squad has been doing with putting dependencies in a swift/s3 bucket rather than in the charm dependencies?  deej just brought it up saying that's the new preferred way to provide static dependencies.
[14:56] <rick_h_> bac: no, I'm not aware. 
[14:56] <rick_h_> bac: I know we had an issue with our charm size being too big at one point
[14:56] <rick_h_> not sure if there was a trick to work around that we had to go through?
[14:57] <rick_h_> luca__: is there a hangout for the meeting? I don't see one in the calendar invite
[14:57] <luca__> rick_h_: I shall add one!
[14:58] <rick_h_> luca__: ty
[15:10] <marcoceppi>  aweee yeahhh it's phone time
[15:10] <marcoceppi> hah, I'm in the wrong room, but now you guys know I'm on the phone
[15:11] <rick_h_> marcoceppi: lol
[15:11] <hatch> lol
[15:12]  * hatch needs to figure out a way to speed up his lxc machine creation
[15:12] <marcoceppi> hatch: ask hazmat, he had some crazy btrfs stuff that made lxc creation lightning fast
[15:13] <hatch> that I'll do, 10m+ is too long
[15:32] <hatch> rick_h_ I created a new bug card which needs to be fixed before release for Project 1 but I can't move it to urgent...it keeps putting it in high so I'm leaving it in Project 1 for now
[15:33] <rick_h_> hatch: ok sounds good
[15:33] <rick_h_> thanks for the heads up
[15:47] <hatch> it would sure be nice if the GUI told you what 'stage' the new machine was at...
[15:48] <hatch> even if it just said 'waiting for machine' and 'running hooks' :)
[15:50] <Makyo> jujugui call in 10
[15:50] <rick_h_> hatch: heh, yea on the radar
[15:51] <hatch> rick_h_ yeah I know....I've been running on real environments for the past little while so I'm noticing all the little 'nice to have's' that we don't see when using the sandbox :)
[15:51] <rick_h_> hatch: yep, that's what got me thinking on it as well
[15:51] <rick_h_> during charm development when I was bringing up new machines and wanted to know when I could get at the logs
[15:51] <rick_h_> e.g. after it's brought up, juju is running on it, etc
[15:51] <gary_poster> we got push back on that before
[15:52] <rick_h_> yea
[15:54] <hatch> gary_poster haters gona hate :)
[15:55] <gary_poster> :-)
[15:58] <rick_h_> jujugui call in 2
[16:18] <hazmat> hatch, marcoceppi its on http://github.com/kapilt/juju-lxc ... it's not polished yet the way the juju-digitalocean provider plugin is, ie. it has setup that needs to be done manually.. but it's what i use everyday.. and can do 50 containers in an env in under a minute offline... hoping to come back to it in march to polish and socialize, but if you need a 5m intro.. i'd be happy to walk through
[16:19] <rick_h_> hazmat: howdy, wanted to check for standup on reviews and commit bits for frankban on deployer?
[16:20] <hatch> hazmat thanks I'm pretty swamped today but I'm definitely interested so I'll be in touch
[16:20] <rick_h_> hazmat: we're getting releases together for MWC and would like to be able to close up the bugs for jcastro for the bundle marketing. 
[16:20] <hazmat> rick_h_, i'm switching tracks to it now.. i'd rather skip the standup, and do the work.. ie. should be done in next 2hrs.
[16:21] <rick_h_> hazmat: rgr, thanks
[16:32] <hatch> benji so what technique did you use to get it to pass the gjslint? I've tried a few different things and no such luck
[16:36] <Makyo> jujugui anyone else receive a message thanking them for downloading a Landscape trial?
[16:36] <hatch> nope
[16:37] <gary_poster> no
[16:37] <Makyo> Huh!  Oh well.
[16:37] <hatch> you were h4x0r3d
[16:38] <rick_h_> hatch: is the scope kind of nuts?
[16:38] <hatch> rick_h_ well I even tried assigning something to it right away and it's still complaining
[16:38] <hatch> so I don't know wth is going on
[16:38] <rick_h_> hatch: push the branch up, I'll pull it down and see what it does here
[16:39] <hatch> rick_h_ ok just trying one more thing first
[16:39] <rick_h_> hatch: rgr
[16:41] <hatch> rick_h_ https://github.com/hatched/juju-gui/tree/upload-inspector-drop I even tried assigning a value to startTheApp right after its assignment and it still thinks it's unused
[16:41] <benji> hatch: I transformed the code from "var foo;\nfoo = bar" to "var foo=bar;"
[16:41] <hatch> benji hmm yeah I tried that too
[16:42] <hatch> sonofa
[16:42] <gary_poster> ...hippopotamus?
[16:42] <hatch> lol
[16:43] <hatch> now THAT is a new one
[16:43] <gary_poster> surprise and delight, that's what we're after.  At least one of 'em, anyway.
[16:44] <rick_h_> hatch: which file? lint passes here
[16:45] <hatch> ugh
[16:45] <hatch>  /vagrant/test/test_startup.js:25:(0133) Unused local variable: startTheApp.
[16:45] <hatch> well ok I guess I'll just leave it then
[16:46] <rick_h_> hatch: hmm, yea push it up with a pull request and see if CI dupes
[16:47] <hatch> ok well I just want to write a few more tests then will do
[16:47] <Makyo> What versions of gjslint?
[16:47] <Makyo> (forget if those are locked down)
[16:48] <frankban> juju-gui: new quickstart 1.1.0 released on the juju stable PPA and on PyPI. New features include: support juju-core 1.18, get admin-secret from juju-generated jenv file, use existing ssh-agent if possible + minor fixes to code and documentation
[16:48] <gary_poster> yay!!!
[16:48] <rick_h_> frankban: woot! you up for doing a small blog post on the jujugui blog? I'll link it from the twitter account. 
[16:49] <Makyo> rick_h_, I think it should autotweet blog posts now
[16:49] <frankban> rick_h_: will do
[16:49] <rick_h_> Makyo: oh awesome
[16:49] <rick_h_> frankban: thanks
[16:50] <rick_h_> Makyo: hatch yea, it's a python package that's in the archives so should be version locked
[16:51]  * frankban bbiab
[16:51] <hatch> ubuntu on air is starting now
[16:51] <rick_h_> hmm, wonder if that'll load on my phone
[16:52] <hatch> it should
[16:52] <hatch> it's just yewtewbe
[16:53] <hatch> http://www.youtube.com/watch?v=gGG_GHYzSLs
[16:53] <hatch> ^ rick_h_  that's the url to the youtube stream (which hasn't started yet)
[16:57] <rick_h_> hatch: yea, works in chrome beta on my phone but not chrome stable on the tablet
[16:57]  * rick_h_ is trying to watch while cooking some lunch wheee
[16:57] <hatch> ahhhh
[17:34] <hatch> blarg now I'm getting a ton of test failures surrounding the index.html file.....:/
[17:35] <rick_h_> did you break your environment hatch? :P
[17:35] <rick_h_> stale branches lead to too much fun
[17:36] <hatch> I might have....but the diffs don't show that there are any changed files which could cause this
[17:36] <hatch> argsaurce
[17:37] <rick_h_> happy to test it out if you push up again
[17:37] <rick_h_> see if I can dupe or if it's your setup
[17:37] <hatch> I think I'll push and then delete the entire branch and reclone
[17:37] <rick_h_> l
[17:37] <rick_h_> k
[17:37] <rick_h_> one of those characters
[17:38] <hatch> lol
[17:49] <hatch> ok apparently deleting that directory also made it re-build the vagrant image
[17:49] <hatch> not cool
[17:49] <hatch> hah
[17:49] <rick_h_> uh, ok
[17:49] <hatch> today is not my day apparently
[17:49] <rick_h_> heh, what is your setup so I don't dupe it? :)
[17:50] <rick_h_> is vagrant all vbox still?
[17:50] <hatch> yeah
[17:50] <rick_h_> hmm, in my research it seems the worst VM tech on osx these days
[17:50] <hatch> parallels is the best, but their Ubuntu support is absolute shit
[17:50] <hatch> and the same goes for their support
[17:50] <hatch> it's only $100....not like you would expect ANY support for that
[17:50] <rick_h_> yea, I was going to get fusion pro which I was reading is the best
[17:51]  * hatch isn't bitter at all
[17:51] <rick_h_> :/
[17:52] <hatch> their support is all 'reply with canned responses' bs
[17:52] <hatch> it's like "thanks, I can read the FAQ too"
[17:54] <hatch> we have a lot of deps
[17:54] <hatch> haha
[17:55] <hatch> rick_h_ yeah my env must have been busted somehow. it now passes lint just fine
[17:55]  * hatch blames python
[17:56] <rick_h_> hatch: heh, glad it's playing nice now
[19:00] <hatch> cool hangouts now tells you 'your next meeting is starting now'
[19:00] <rick_h_> yea, kind of neat
[19:04] <Makyo> jujugui small review. QA is everything works w/ bundle vis and environment (no change in behavior, just ensuring unique ids) https://github.com/juju/juju-gui/pull/135
[19:05] <rick_h_> Makyo: cool, looking
[19:05] <rick_h_> Makyo: some failing tests?
[19:06] <Makyo> Boo. Worked here..
[19:06] <rick_h_> seems legit
[19:06] <Makyo> Let me dig in, may have had a .only left over
[19:06] <Makyo> Ah, yeah.  Crap.  Sorry.  1sec.
[19:13] <hatch> rick_h_ 1:1 time?
[19:14] <rick_h_> hatch: linky
[19:14] <hatch> pmd
[19:32] <Makyo> rick_h_, https://plus.google.com/hangouts/_/calendar/Z2FyeS5wb3N0ZXJAY2Fub25pY2FsLmNvbQ.ji7urs3vqfdc7f3nnbk33hh67s?authuser=1
[19:33] <rick_h_> Makyo: cool, one sec
[19:52] <bac> rick_h_: update: mjc is processing charms/bundles like a champ.  i've asked deej for an update as to the status but heard no reply
[19:53] <bac> nothing was added to the RT
[19:53] <rick_h_> bac: thanks was watching that. Yay for working but :/ for communication
[19:53] <rick_h_> bac: thanks for tracking that all day
[19:54] <bac> rick_h_: could be busy, lunching, etc.  i'll check again before eod
[19:55] <rick_h_> bac: cool thanks
[19:55] <bac> benji: at your leisure could you look at https://codereview.appspot.com/65920045/
[19:55] <bac> it is mighty small
[19:55] <benji> bac: sure
[19:55] <benji> I'll take a look now.
[19:58] <rick_h_> Makyo: +1 ty
[19:58] <benji> bac: I thought that card was about making the search strings "charms" and "bundles" (with or without the "s") return all of the charms/bundles.
[19:58] <rick_h_> benji: +1
[19:59] <bac> benji: well, yes, we *could* do that.
[19:59] <bac> benji: and due to popular demand i think i will
[20:00] <bac> benji: wait here...
[20:00]  * benji opens up a newspaper and leans agains the wall.
[20:09] <bac> benji: we should be able to do both.  our current search is broken, as there is apparent support for using 'match_all' if no search term is given to return all results.  but it does not work.
[20:10] <bac> if it worked, then 'bundles:' would return all bundles but 'bundles:mysql' would return only bundles with mysql
[20:10] <benji> bac: I would think you could just tweak what you have to do the kind search alone if there is no ":" and following string
[20:11] <bac> benji: but 'match_all' should work and be tested
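The fallback bac and benji are converging on can be sketched like this. A hedged illustration only: the builder function and the `name`/field choices are made up here, not charmworld's schema, and a real Elasticsearch request body would carry more structure.

```python
def build_search_query(term=None):
    """Build an Elasticsearch-style query body.

    With no search term, fall back to match_all so a bare 'bundles:'
    search returns every result; otherwise do a normal match query.
    (The 'name' field is illustrative, not charmworld's actual mapping.)
    """
    if term:
        return {"query": {"match": {"name": term}}}
    return {"query": {"match_all": {}}}
```

Under this shape, 'bundles:' (no term) yields the match_all body and 'bundles:mysql' yields a scoped match, which is exactly the split described above.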
[20:18] <hatch> new leankit UI coming up http://leankit.com/blog/2014/02/leankits-update-ui/
[20:24] <rick_h_> looks cool
[20:24] <rick_h_> was playing in the backlog today
[20:25] <hatch> I was looking at Trello for us
[20:25] <hatch> and I can't really see anything it does better than lk
[20:25] <hatch> at least for our typical workflow
[20:27] <rick_h_> yea the ability to do images and docs is cool
[20:27] <rick_h_> but we'd have to do several boards 
[20:27] <rick_h_> i use it for my personal stuff but not sure it'd scale for the team
[20:27] <hatch> yeah my thoughts exactly
[20:27] <Makyo> I use trello for other projects.  Easy to get used to multiple boards.  But yeah, LK is fine for us.
[20:28] <Makyo> Though the update makes it look like trello :P
[20:28] <rick_h_> lol
[20:28] <hatch> well it can't look much worse than it does now.... :D
[20:29] <hatch> my only real complaint about it is that it could use some performance improvements
[20:29] <hatch> it tends to lag if it's open for a day
[20:31] <Makyo> I liked that performance improvement blog from Trello, actually.  That was a neat way to attack the problem.
[20:33] <hatch> yeah it was pretty cool
[20:33] <hatch> so many issues people have had to deal with because they don't use YUI events :)
[20:35] <Makyo> It's almost like you really like YUI or something
[20:35] <hatch> actually I'm trying to find alternatives because I don't feel like they are trying to improve it any more
[20:35] <hatch> but nothing else exists that takes care of all of the event issues people have
[20:39] <hatch> I'm hoping they surprise me but I have heard that the YUI team is being pulled to work on other Y! projects all the time
[20:48] <hatch> one really cool thing we could take advantage of is to start writing ES6 modules and then 'build' them into YUI modules
[20:48] <hatch> which is now supported
[21:06] <hatch> jujugui I accidentally cloned my juju-gui repo using https and now I need to push back up, anyone know if I can tell it to use the ssh keys?
[21:07] <hazmat> hatch, remove remote.. add new one with ssh
[21:08] <Makyo> or remote set-url
[21:08] <hatch> hazmat ahh good one thanks
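Both suggestions boil down to rewriting the remote's URL. A minimal sketch driving git from Python, assuming the `set-url` route Makyo mentions (hazmat's remove-and-re-add route reaches the same end state); the repo path, remote name, and URLs are examples only:

```python
import subprocess

def switch_remote_to_ssh(repo_dir, ssh_url, name="origin"):
    """Point an existing remote at an ssh URL in place via
    'git remote set-url', so pushes use your ssh keys."""
    subprocess.run(["git", "remote", "set-url", name, ssh_url],
                   cwd=repo_dir, check=True)

def remote_url(repo_dir, name="origin"):
    """Read back the remote's configured URL from git config."""
    result = subprocess.run(
        ["git", "config", "--get", "remote.%s.url" % name],
        cwd=repo_dir, check=True, capture_output=True, text=True)
    return result.stdout.strip()
```

Equivalently, straight from the shell: `git remote set-url origin git@github.com:juju/juju-gui.git`.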
[21:18] <hatch> jujugui lf a review/qa for https://github.com/juju/juju-gui/pull/136 it will require a real env being spun up
[21:23] <hatch> don't everyone jump up at once :D
[21:34] <hatch> the tests passed.....does that help? :)
[21:57] <rick_h_> hatch: hah, might try to look at it tonight at the coffee shop
[22:00] <hatch> jujugui lf a review/qa for https://github.com/juju/juju-gui/pull/137 qa can be done in sandbox
[22:01] <hatch> rick_h_ thanks, anything you'd like me on for the rest of the day?
[22:01] <rick_h_> hatch: want to peek at that safari test failure?
[22:02] <rick_h_> would be cool to include safari support in 1.0
[22:02] <hatch> sure thing
[22:02] <rick_h_> hatch: or that damn IE test failure
[22:02] <rick_h_> hatch: that one will get you cake from teammates
[22:02] <hatch> lol, the intermittent one?
[22:03] <rick_h_> hatch: but yea, just anything in the high maint for now I think
[22:03] <rick_h_> hatch: yea, the one that took Makyo 4 landing attempts to get through yesterday
[22:03] <hatch> ok I'll take a look at that one
[22:03] <rick_h_> thanks
[22:04] <hatch> I have now blocked the high lane, so someone HAS to review my branches :P
[22:04] <rick_h_> lol, yea I'll look at them either tonight or in the morning. Should have you unblocked then
[22:04] <hatch> oh I'm not blocked....everyone else is lol
[22:16] <hatch> and my latest branch just failed because of that ie failure hah
[22:17] <hatch> I was able to reproduce locally though so that's good
[22:17] <hatch> oddly enough it's in help-dropdown.js ....
[22:22] <rick_h_> hatch: yea, but it's in the mask bits and it started when you added drop mask support
[22:23] <rick_h_> hatch: so I have a feeling there's some mask collision/race condition there between the two
[22:24] <hatch> well actually, it shouldn't work at all
[22:24] <hatch> it should throw the error 100% of the time
[22:24] <hatch> haha
[22:45] <hatch> hmm interesting...
[22:51] <hatch> jujugui anyone still around want to help me sanity check something?
[22:51] <Makyo> hatch,  sure.
[22:51] <hatch> Makyo ok switch to a fresh develop branch
[22:51] <Makyo> (is this the help-dropdown thing? Because I couldn't repro after I ran updates)
[22:52] <hatch> in app.js:846 put a debugger statement
[22:52] <Makyo> Okay.
[22:52] <rick_h_> marcoceppi: why for I get hangout things from you? Are you trying to send me messages?
[22:52] <hatch> and then in test_app put a .only on 719
[22:52] <hatch> and then put another debugger in help-dropdown.js:91
[22:53] <hatch> now....when you run the tests using test-server (in chrome is fine)
[22:53] <hatch> when you hit the app.js debugger you should run Y.one('#help-dropdown') and it'll give you `null`
[22:53] <hatch> then when you hit the debugger in help-dropdown run this.get('container')
[22:54] <hatch> it will be the rendered template somehow?
[22:54] <hatch> it 'should' be null because it was set as null on instantiation
[23:00] <Makyo> Maybe Null isn't what we want? From the docs: "The default container is a <div> Node, but you can override this in a subclass, or by passing in a custom container config value at instantiation time. If you override the default container in a subclass using ATTRS, you must use the valueFn property. The view's constructor will ignore any assignments using value."
[23:00] <hatch> well null definitely isn't what we want, but it's null because the Y.one('#help-dropdown') returns null because the element doesn't exist...
[23:01] <hatch> I'll look into the view source
[23:01] <hatch> sec
[23:01] <Makyo> #main is indeed empty.  Did you mean to instantiate something there?
[23:02] <hatch> ahhh I think I found it...http://yuilibrary.com/yui/docs/api/files/app_js_view.js.html#l347
[23:03] <hatch> it looks like if a container is falsy then it makes a new one from a '<div/>' string
[23:03] <Makyo> Yyyyeah.  That's the part I quoted.  It's expecting a Node, per the docs.
[23:03] <hatch> the question is....why does it sometimes have an element and sometimes not
[23:04] <hatch> we can fix it pretty easily now that we know what the issue is....but why is there an issue there....it 'should' just create a div and be done with it
[23:04] <Makyo> It is an empty div, for me.
[23:04] <Makyo> this.get('container').getDOMNode() 
[23:04] <marcoceppi> rick_h_: no, it's Google being crazy
[23:04] <hatch> the only place it won't be is in IE......sometimes
[23:05] <hatch> that's whats causing the failure
[23:05] <Makyo> Yes.
[23:05] <hatch> so what I'm saying is....I don't see why it doesn't create the element.....sometimes
[23:06] <Makyo> Which is where we were at the start. :)
[23:06] <hatch> basically the Y.Node.create('<div/>') is returning an empty object.....sometimes haha
[23:06] <hatch> well I'm going to fix the instantiation code to solve this issue but I am really curious as to why it's failing
[23:15] <hatch> ahhhh I figured it out
[23:16] <hatch> it's a race condition between the mocking of Y.one() and the Y.Node.create() call 
[23:18] <hatch> You cannot stub Y.one without Y.Node.create() failing see http://yuilibrary.com/yui/docs/api/files/node_js_node-create.js.html#l20
[23:19] <hatch> Makyo thanks for sanity checking that
[23:19] <Makyo> hatch,  np
[23:29] <rick_h_> hatch: wtf is 'launchpad' downloading 2gb for?
[23:29] <hatch> rick_h_ not sure?
[23:29] <hatch> I'm going to need some more information
[23:30] <rick_h_> hatch: just launchpad has a progress bar says downloading 700mb of 2gb
[23:30] <hatch> hmm link?
[23:30] <rick_h_> and no idea what it's downloading. Nothing in there says what it is
[23:30] <rick_h_> it's launchpad on the mac
[23:30] <rick_h_> no link, just in my little app bar 
[23:30] <hatch> ohhh
[23:30] <hatch> I thought you were talking about launchpad.net
[23:30] <hatch> :D
[23:31] <rick_h_> heh, no, looks like it's over a little star icon, imovie?
[23:31] <hatch> if you open up App Store
[23:31] <rick_h_> doesn't show anything in my app store app, but in the launchpad UI there's downloading
[23:31] <hatch>  it should show your downloads
[23:31] <hatch> hmm that's odd
[23:32] <hatch> iMovie is a star
[23:32] <hatch> so that could be it
[23:32] <hatch> it showed all of the downloads in the app store for me
[23:33] <hatch> but then again the app store software is a little bonkers sometimes
[23:33] <hatch> so it might show up later lol
[23:38] <hatch> rick_h_ hows the resolution?
[23:38] <rick_h_> it's ok so far. Not gotten editors and such going
[23:38] <rick_h_> just trying to get through setup, install updates, etc
[23:38] <rick_h_> before heading to the coffee shop
[23:41] <hatch> cool, well if you run into anything else odd I'll see how I can help :)