[00:36] <snap-l> Evening
[01:34] <rick_h_> evening
[01:58] <Blazeix> nice: http://arstechnica.com/gadgets/2013/02/dells-linux-ultrabook-gets-more-pixels-european-availability/
[01:59] <rick_h_> yea, definitely cool
[02:40] <snap-l> wait wait
[02:40] <rick_h_> huh?
[02:46] <Blazeix> the suspense is killing me
[02:48] <snap-l> waiting for JoDee to get out of class
[02:50] <rick_h_> oh that's less dramatic
[02:55] <Blazeix> yeah, it should have been read in a singsong voice, rather than an urgent tone.
[04:08] <snap-l> heh
[12:09] <rick_h_> http://r.bmark.us/u/a7d2eb71a36a9b ?!
[12:09]  * rick_h_ gets wallet out
[13:01] <snap-l> Is it just me, or does anyone else have a hard time getting excited over a tablet?
[13:02] <snap-l> I mean, cool if you're into that sort of thing, but felt like some artificial buzz
[13:04] <rick_h_> I <3 my N7
[13:04] <snap-l> Yeah, don't get me wrong, I think the N7 is pretty cool
[13:06] <snap-l> Just the "Ubuntu Tablet, woo woo" is leaving me a little cold.
[13:07] <snap-l> I'll fully admit I'm probably not the target audience. ;)
[13:07] <brousch> Is it out?
[13:08] <rick_h_> announcement today
[13:08] <rick_h_> but it'll probably just be the software, hopefully runs on an N7
[13:09] <brousch> probably? hopefully? You're our inside man! Get the real scoop!
[13:09] <rick_h_> lol, but I don't know anything. I didn't know anything until the page went up on the site
[13:09] <snap-l> Considering the "on penalty of death" threats I've heard some folks say... ;)
[13:09] <rick_h_> and $#@$# sirius for being a pita to deal with online
[13:09] <brousch> A little death never hurt anyone
[13:09] <snap-l> rick_h_: Use the SB
[13:10] <snap-l> there's a client for it
[13:10] <rick_h_> snap-l: it's for the car stuff
[13:10] <snap-l> You'll need the hardware radio, though. It doesn't support software clients.
[13:10] <rick_h_> it's got the traffic/etc and supposedly can get weather but maybe
[13:10] <snap-l> rick_h_: Yeah, but get the client
[13:10] <rick_h_> k
[13:10] <snap-l> Was the way I listened to marketplace at 6pm. ;)
[13:11] <snap-l> May as well get the most out of your 3 months
[13:11] <rick_h_> yea, but looking into what it'll run after that. Like the traffic bits, not sure on the radio
[13:11] <rick_h_> I'll definitely use it down to Atl and back
[13:11] <rick_h_> trial it up ftw
[13:11] <snap-l> Oh definitely
[13:12] <snap-l> The only reason I like Sirius is for those long drives
[13:12] <snap-l> but other than that, I have podcasts.
[13:12] <rick_h_> though I've got 3 audio books that I could put a dent into
[13:12] <snap-l> and with bluetooth support in the car, I have one less thing to fiddle with
[13:12] <rick_h_> and can do pandora over BT from the phone as well
[13:13] <snap-l> rick_h_: You're indirectly paying for Sirius. ;)
[13:13] <snap-l> Might as well use it.
[13:13] <rick_h_> yea
[13:13] <snap-l> I let ours expire.
[13:13] <snap-l> and now I'm getting the "6 months free with purchase of a year sub"
[13:13] <rick_h_> lol
[13:14] <brousch> Is the Touareg AWD?
[13:14] <rick_h_> brousch: yea
[13:14] <brousch> I could've used it today
[13:14] <rick_h_> brousch: pretty good offroad/etc based on reviews and the like
[13:15] <rick_h_> but honestly I had that with the subaru pretty well so just hoping it keeps up
[13:15] <rick_h_> I'm already nervous because it's more a 90% front / 10% rear by default for road driving
[13:15] <rick_h_> vs subaru 50/50 all the time
[13:16] <brousch> Subaru's AWD annoys me sometimes
[13:16] <brousch> Takes too long to transfer power to different wheels
[13:17] <brousch> I think a manual subaru would be better
[13:17] <rick_h_> meh, never gave me grief. <3
[13:18] <brousch> I didn't notice it as much with the 2000 Forester, but our 2009 Forester is laggy
[13:19] <rick_h_> well forester is the poor mans subie imo. Outback or bust
[13:19] <rick_h_> when I got my last outback I drove a forester around and it did not feel nearly as well put together as the outback
[13:20] <rick_h_> road noise, ergo, everything seemed not up to snuff
[13:20] <brousch> I agree, but it's the wife's car
[13:20] <rick_h_> gotcha
[13:29] <snap-l> ls
[13:29] <snap-l> bah
[13:30] <brousch> IRCINYCL
[13:34] <snap-l> OAYFM
[13:34] <snap-l> Once Again You Fail Me
[13:35] <rick_h_> win7 doesn't have telnet? w.t.f.
[13:35] <snap-l> rick_h_: Why would it?
[13:36] <rick_h_> because it has before; it's always been available from the cli
[13:36] <snap-l> I'm surprised it doesn't ship with PuTTY
[13:36] <snap-l> Other than PuTTY is a miserable interface to a mediocre SSH client
[13:36] <brousch> PuTTY is my savior
[13:37] <snap-l> It's a heaping dose of adequate
[13:37] <brousch> Well-put
[13:44] <rick_h_> and $@#$@# IE10 and its developer tools of suck
[13:45] <snap-l> Heh
[13:45] <brousch> OMG you have to use IE?
[13:45] <rick_h_> have to have our thing work in IE10
[13:48] <brousch> That's one thing I love about my big internal project. I can give the finger to IE
[13:49] <brousch> I've trained everyone to use Chrome and FF, and only use IE when a site doesn't work on the good browsers
[14:10] <brousch> This is awesome http://www.engadget.com/2013/02/19/3doodler/
[14:25] <jrwren> rick_h_: seeking criticism: https://github.com/jrwren/aggregate
[14:25] <rick_h_> jrwren: about to jump on a call but will look in a few
[14:26] <brousch> jrwren: Looks interesting
[14:26] <jrwren> brousch: not pointless?
[14:27] <brousch> Depends on the data you're dealing with
[14:27] <jrwren> right.
[14:27] <snap-l> jrwren: What are you looking to do? Keep a running average on a list or dict of data?
[14:28] <jrwren> the goal is to aggregate a long stream. e.g. I'll be running it on a generator which generates 4+GB of data
[14:28] <snap-l> with min and max?
[14:28] <jrwren> snap-l: the running aggregates part isn't done yet, but that is the next step.
[14:28] <snap-l> I could see this being useful
[14:28] <jrwren> snap-l: ultimately, "maybe"  but the goal is simply to be able to aggregate a large set.
[14:29] <snap-l> Yeah, at the very least the min / max stuff would be nice.
[14:29] <jrwren> e.g. I'm reading binary data from a custom database on disk, turning the records into dict and yielding each one in a generator - so only 1 record is ever in memory at a time, but I can aggregate nicely.
[14:29] <snap-l> running average would be tricky
[14:29] <jrwren> hrm. I should document what i just said.
[14:29] <snap-l> jrwren: Bingo. ;)
[14:29] <jrwren> running average is easy. I have running sum and running count :)
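The running sum / running count idea jrwren describes can be sketched as a single constant-memory pass over a generator. This is an illustrative sketch, not the actual API of the aggregate repo linked above; all names here are made up.

```python
def stream_aggregate(records):
    """One pass over a stream of numbers in O(1) memory.

    Keeps running count, sum, min, and max; the average falls out of
    sum/count at the end, as discussed above. Illustrative only --
    not the API of jrwren's aggregate module.
    """
    count = 0
    total = 0
    lo = hi = None
    for x in records:
        count += 1
        total += x
        lo = x if lo is None else min(lo, x)
        hi = x if hi is None else max(hi, x)
    avg = total / count if count else None
    return {"count": count, "sum": total, "min": lo, "max": hi, "avg": avg}
```

Because the input is a generator, only one record is ever in memory at a time, which is exactly the constraint jrwren mentions below for his 32-bit instance.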
[14:29] <snap-l> Ah, right
[14:29] <snap-l> derp
[14:32] <brousch> jrwren: Make sure this doesn't already do what you need http://matplotlib.org/api/mlab_api.html#matplotlib.mlab.rec_groupby
[14:32] <jrwren> i'd have to use numpy
[14:32] <jrwren> which means loading everything into a numpy array.
[14:33] <jrwren> i'm actually dealing with datasets which cannot fit into memory on the machine on which I'll be running them.
[14:33] <brousch> Also maybe http://pandas.pydata.org/pandas-docs/stable/groupby.html
[14:33] <brousch> heh
[14:33] <brousch> Big Data!
[14:33] <jrwren> its not really.
[14:33] <jrwren> its more like small machine :)
[14:35] <brousch> Confucius say, "Big Data in Small Machine make for painful pleasure"
[14:35] <_stink_> hahaha
[14:36] <_stink_> stolen
[14:43] <snap-l> jrwren: Just use RLE. ;)
[14:43] <snap-l> I'll show myself the door now.
[14:51] <jrwren> snap-l: RLE?
[14:51] <rick_h_> jrwren: my first reaction, with the knowledge that you're talking stream 4GB/etc is that databases already do this and a tmp table would do it in a hurry and you can then do cooler work like multi process loading of data or the like if that's io bound.
[14:51] <snap-l> Run Length Encoding. ;)
[14:52] <rick_h_> don't have 4gb of memory? ouch
[14:52] <jrwren> snap-l: oh, not on this dataset.
[14:52] <jrwren> its running on a 32bit machine.
[14:52] <snap-l> jrwren: I'm kidding.
[14:52] <jrwren> so 2GB process limit :)
[14:52] <rick_h_> meh, cut --help :P
[14:53] <jrwren> rick_h_: you might be right.
[14:53] <jrwren> I did do exactly what you just said before doing this - pulled it into postgresql got the numbers I needed.
[14:53] <snap-l> jrwren: Too bad you're not in a physics lab. I'm pretty sure RLE would work there. (All the data is a series of ones or zeroes)
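For the record, the RLE (Run Length Encoding) snap-l is joking about is just collapsing repeated values into (value, run-length) pairs — a minimal sketch:

```python
def rle_encode(seq):
    """Run-length encode a sequence into (value, run_length) pairs.

    E.g. the 'series of ones or zeroes' joked about above compresses
    well when the runs are long.
    """
    out = []
    for item in seq:
        if out and out[-1][0] == item:
            out[-1] = (item, out[-1][1] + 1)
        else:
            out.append((item, 1))
    return out
```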
[14:54] <jrwren> snap-l: rofl.
[14:54] <rick_h_> jrwren: so I mean this is cool and solves your problem or what not, but just not how I'd think to go about it myself.
[14:54] <jrwren> rick_h_: not loading things into postgresql was an assumed constraint.
[14:54] <jrwren> perhaps it was a false assumption on my part.
[14:54] <snap-l> Beware false assumptions.
[14:55] <rick_h_> jrwren: even something like a NOSQL db could work as you could create a view that does the agg and start loading and it'll update per insert
[14:55] <snap-l> First commandment: Thou shalt have no other assumptions besides me
[14:55] <jrwren> perhaps automating postgresql import is the right way
[14:55] <rick_h_> jrwren: if you're sure this is all you need then cool go for it. I just find that these types of things end up getting new twisted requirements over time and the solution doesn't scale up/out like a real db
[14:55] <jrwren> rick_h_: which NOSQL db?  mongo sucks to go to disk. redis is KV. what do you recommend?
[14:56] <rick_h_> jrwren: honestly, I was thinking about couch and how you create a view it auto updates the view on insert
[14:56] <jrwren> i'll explore that next. thanks.
[14:56] <rick_h_> so load time takes the hit but reading is instant since it's pre-calculated
[14:56] <jrwren> right.
[14:56] <rick_h_> not sure if you've got a need for multiple reads/etc
[14:56] <snap-l> Also PostgreSQL can emulate KV and store JSON
[14:57] <rick_h_> heh, pgsql hstore on group_by value = sum :P
[14:57] <jrwren> snap-l: exactly, which is why I never bother with mongo or redis :)
[14:57] <rick_h_> but that will probably be slower because you will run out of ram and the table will hit disk, but it's a TON more flexible.
[14:57] <rick_h_> so depends on your goals.
[14:57] <snap-l> rick_h_: views. ;)
[14:58] <rick_h_> snap-l: not following
[14:58] <snap-l> use a view to get the average
[14:58] <snap-l> Though you take the compute hit every time you want it. ;)
[14:58] <jrwren> i've already done exactly all of this with postgresql.
[14:59] <rick_h_> right, view only helps if it's materialized and such but yea. That's what I mean though. It's a solved problem there
[14:59] <jrwren> use the exact same py generator reader to create a COPY FROM import, then index on 1 column, then do my aggregation.
[14:59] <jrwren> it works well.
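The "same py generator reader feeding a COPY FROM import" approach jrwren describes can be sketched as an adapter that renders dict records in PostgreSQL's COPY text format (tab-delimited, `\N` for NULL) so the data set never has to be materialized. The function name, column names, and the psycopg2 usage in the comment are assumptions for illustration, not jrwren's actual code:

```python
import io

def records_to_copy(records, columns):
    """Render a stream of dict records as PostgreSQL COPY text format.

    Tab-delimited fields, literal \\N for NULL, one record per line.
    Simplified sketch: real COPY text format also requires escaping
    backslashes, tabs, and newlines inside string values.
    """
    buf = io.StringIO()
    for rec in records:
        fields = []
        for col in columns:
            v = rec.get(col)
            fields.append("\\N" if v is None else str(v))
        buf.write("\t".join(fields) + "\n")
    buf.seek(0)
    return buf

# With psycopg2 (assumed driver, hypothetical table/columns), roughly:
#   cur.copy_expert("COPY events (ts, value) FROM STDIN",
#                   records_to_copy(record_gen, ["ts", "value"]))
```

The COPY path is fast because the server ingests the whole stream in one command instead of one INSERT per record; the index on one column is then built afterward, as jrwren says.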
[14:59] <rick_h_> this agg module can help a specific use case, but then it seems like there'd be numpy/etc stuff that does this as well.
[14:59] <rick_h_> but I've not used it enough to speak about it intelligently
[14:59] <jrwren> the issue is the requirement is run on ec2 w/out ebs. so I'd have to recreate the postgresql on demand.
[15:00] <snap-l> Yeah, and I'm not sure how efficient it is either.
[15:00] <jrwren> afaik numpy/etc all needs to fit into ram.
[15:00] <rick_h_> jrwren: copy/load from s3
[15:00] <jrwren> rick_h_: its in s3 now, as raw data.
[15:00] <rick_h_> ah gotcha
[15:01] <snap-l> What's to stop you from keeping the database files on S3?
[15:01] <snap-l> (seriously doesn't understand S3)
[15:01] <jrwren> s3 is slow.
[15:01] <rick_h_> because pgsql splits things into a ton of diff files based on the block size and it'd be a mess in s3
[15:02] <snap-l> kk
[15:02] <rick_h_> and yea, compared to local it's very slow, but pretty fast as far as internet goes especially on ec2 inside the network
[15:02] <rick_h_> heck of a lot faster than uploading from your machine to the instance
[15:02] <rick_h_> unless you're in KC I guess :P
[15:02] <jrwren> lol.
[15:02] <jrwren> given its a TB of data...
[15:03] <rick_h_> oooh, thought it was 4GB?
[15:03] <jrwren> lol, no.
[15:03] <jrwren> the whole point is that i never load more than 1 record into ram at once.
[15:03] <jrwren> i just assume a 2GB limit since I'm on a 32bit instance.
[15:04] <jrwren> "it doesn't fit into ram" is the #1 requirement.
[15:04] <rick_h_> well hell, for TB of data we're talking hadoop/reduce functions
[15:04] <jrwren> nope, will not JVM.
[15:04] <jrwren> no hadoop allowed :)
[15:04] <jrwren> unless of course, we rewrite hadoop.
[15:04] <jrwren> pydoop.
[15:05] <jrwren> thankfully my coworkers and mgmt have the same aversion to jvm that I do.
[15:06] <rick_h_> https://github.com/michaelfairley/mincemeatpy or http://mikecvet.wordpress.com/2010/07/02/parallel-mapreduce-in-python/ for some fun weekend tinkering :)
[15:07] <rick_h_> http://engineeringblog.yelp.com/2010/10/mrjob-distributed-computing-for-everybody.html is cool but uses hadoop under the hood
[15:07] <jrwren> thanks.
[15:08] <rick_h_> but yea, TB of data like that is a straight up map/reduce problem and there's stuff to do that.
[15:08] <rick_h_> and one of the things couch did kind of cool
[15:08] <jrwren> except I can do it in a timely manner using exactly what I just wrote.
[15:08] <jrwren> MR is overkill.
[15:09] <rick_h_> yea, so what are you doing then? Splitting one s3 file into a dozen, firing up 12 micro workers, and running your script?
[15:09] <jrwren> i mean, its only 1TB of data. I can read that off disk in a reasonably short period of time. IO is vastly slower than proc, so some math on what I just read is near zero overhead.
[15:10] <jrwren> thousands of s3 files, single small instance, for i in files; do keepsum $i ; done
[15:10] <jrwren> something like that :)
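jrwren's "for i in files; do keepsum $i; done" loop — thousands of S3 objects, one small instance, a running sum folded across all of them — looks roughly like this in Python. Both names here are made up for the sketch; `read_values` stands in for whatever reads one S3 object and yields numbers:

```python
def keepsum(files, read_values):
    """Fold a running sum/count across many files, one value at a time.

    `read_values(name)` yields numbers from a single file, so memory
    stays constant no matter how many files or how big they are.
    Hypothetical names; the real keepsum is jrwren's script.
    """
    total = 0
    count = 0
    for name in files:
        for v in read_values(name):
            total += v
            count += 1
    return total, count
```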
[15:10] <rick_h_> gotcha
[15:10] <rick_h_> you're doing MR, just with your own micro framework committed to a specific function run :P
[15:10] <jrwren> i will probably never process the whole TB at once.
[15:10] <jrwren> of course.
[15:10] <jrwren> EVERYTHING is map and reduce.
[15:11] <rick_h_> lol
[15:11] <jrwren> no no - its true.
[15:11] <jrwren> its a fundamental tenet of functional programming.
[15:11] <jrwren> and map is just a special case of reduce.
[15:11] <jrwren> or is that the other way around, I forget.
[15:12] <jrwren> i really appreciate the dialog. I'll try some other things too.
[15:12] <jrwren> TY
[16:12] <brousch> http://www.omgubuntu.co.uk/2013/02/ubuntu-for-tablet-unveiled-by-canonical-nexus-7-download-coming-thursday
[16:12] <brousch> I almost bought a Nexus 10
[16:13] <rick_h_> yea, I'm on the edge of doing that right now
[16:13] <rick_h_> love the N7 and curious if a 10 would be any good. This way I could test out the ubuntu on it and keep my N7 for the handy stuff I love on it
[16:13] <jcastro> what sucks about the 7 and 10
[16:13] <jcastro> is they're totally different hardware
[16:13] <rick_h_> yea :/
[16:13] <rick_h_> two diff companies
[16:13] <jcastro> at first I was like "oh cool, same hardware, different form factor."
[16:13] <jcastro> of course, that would be too easy
[16:13] <rick_h_> lol
[16:21] <jrwren> its only 200G of ram :)
[16:51] <rick_h_> oooh, snow...go go go. I want to go play
[16:53] <rick_h_> Blazeix: have you used google maps api some?
[17:22] <rick_h_> ...going...to...have...meltdown...
[17:22] <snap-l> What happened?
[17:23] <rick_h_> this stupid project is set up in the most stupid way and I'm getting tired of working around it to submit a stupid 2-line patch...
[17:23] <rick_h_> every time I fix one thing another is broken...dammit these aren't stupid devs. Why are they trying to make me want to press the nukes button?
[17:24] <snap-l> They don't know any better?
[17:25] <snap-l> Or "better" left the station a while ago, and now we're stuck with a culture of meh?
[18:17] <rick_h_> http://www.ubuntu.com/devices/tablet does look kind of cool (the video)
[18:51] <brousch> Who dressed Shuttleworth for that video?
[18:51] <brousch> He looks like a homeless guy
[18:55] <brousch> Come on. You're a billionaire playboy cosmonaut pimping the future of technology. Trim the chesthair!
[18:58] <_stink_> depends on what he's trying to attract
[18:59] <brousch> werewolves?
[19:01] <_stink_> mmm
[19:02] <brousch> Or are you saying you want to dig your hands deep into his chestfur and give it a playful tug?
[19:03] <_stink_> suddenly i see your point
[19:10] <snap-l> _stink_: I think you mean you can't unsee his point.
[19:12] <brousch> Why are you guys looking at my point? Don't make me put pants on.
[19:21] <_stink_> good thing years of scientific training has ruined my imagination.
[20:08] <rick_h_> snow squalls wooo
[20:57] <jrwren> he does not look homeless.
[20:58] <jcastro> n0p: hey
[20:58] <jrwren> "gracefully on different screen sizes and resolutions"  HOW did THEY DO THAT?!?!
[20:59] <snap-l> jrwren: The desktop is a responsive web page. ;)
[20:59] <jrwren> oh.
[21:00] <snap-l> Actually, I'm not sure how they did that, but I have a feeling that might not be that far off
[21:00] <snap-l> SVG all the things
[21:00] <jrwren> i look forward to hearing about an official dev kit :)
[21:00] <snap-l> I'm surprised more devices don't do SVG natively
[21:01] <snap-l> that and how Apple does their graphics (PDF / Postscript) seem like the best way to tackle different resolutions.
[21:02] <jrwren> its not how apple does different resolutions though.
[21:02] <snap-l> Right
[21:03] <snap-l> I know that's how they scale their icons, though
[21:05] <snap-l> Wonder how much work it would be to apply a SVG-like canvas to mobile devices
[21:05] <snap-l> so instead of saying "plot this bitmap at 40x50", you could say "object blah blah is this size relative to the canvas, and is relatively in this position"
[21:06] <snap-l> maybe my request answers my question of how much work it would be. ;)
[21:08] <brousch> You can make a GUI using percentages and relative sizes with Kivy
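snap-l's "object blah blah is this size relative to the canvas" idea — which is also the spirit of Kivy's fractional size_hint/pos_hint that brousch mentions — reduces to converting 0..1 fractions into absolute pixels at render time. A minimal sketch (function and parameter names are made up):

```python
def place(rel_x, rel_y, rel_w, rel_h, canvas_w, canvas_h):
    """Convert canvas-relative geometry (0..1 fractions) to pixels.

    The same relative description renders correctly on any resolution;
    only the canvas size changes.
    """
    return (round(rel_x * canvas_w), round(rel_y * canvas_h),
            round(rel_w * canvas_w), round(rel_h * canvas_h))
```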
[21:08] <jrwren> snap-l: no, you are wrong.
[21:08] <jrwren> that is not how apple scales their icons.
[21:08] <jrwren> at least not on iOS.
[21:08] <jrwren> apple does not scale their icons.
[21:08] <jrwren> you ship every needed icon resolution with your appl.
[21:08] <snap-l> jrwren: On OSX I believe that's how it works
[21:08] <jrwren> with your app.
[21:08] <snap-l> Yeah, and I think that's dumb. :)
[21:08] <jrwren> yeah, on OSX, it must.
[21:08] <jrwren> i mean zooming the dock is the example, right?
[21:08] <snap-l> right
[21:09] <snap-l> I know that's how it's done on Android, because when I dumped JoDee's SD card for images there were a ton of different sized icons
[21:15] <brousch> http://developer.android.com/guide/practices/ui_guidelines/icon_design.html
[21:16] <brousch> You need like half a dozen different icons, and then you need to make them for multiple different densities
[21:16] <snap-l> yeah, that's dumb
[21:17] <snap-l> SVG all the things. ;)
[21:18] <brousch> They suggest you use a vector image so that making all the static images will be easier
[21:22] <snap-l> I know this is probably to help make slower devices not have to work so hard, but I'd prefer it to render that stuff on the fly
[21:22] <brousch> a real developer would create one SVG and automate making the little ones from it
[21:23] <brousch> Actually a real developer would write a program to generate the SVG and the little ones
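brousch's "automate making the little ones" comes down to multiplying one baseline size by Android's density scale factors (ldpi 0.75x, mdpi 1.0x, hdpi 1.5x, xhdpi 2.0x, xxhdpi 3.0x, per the icon guidelines linked above), then rasterizing the SVG at each result. A sketch of the size computation:

```python
# Android density buckets scale from the mdpi baseline (1 dp = 1 px at mdpi).
DENSITY_SCALE = {"ldpi": 0.75, "mdpi": 1.0, "hdpi": 1.5, "xhdpi": 2.0, "xxhdpi": 3.0}

def icon_sizes(baseline_dp=48):
    """Pixel size per density bucket for one icon.

    48dp is the launcher-icon baseline from the Android guidelines;
    the rasterizing step (SVG -> PNG at each size) is left to a tool
    like an Inkscape export and is not shown here.
    """
    return {d: round(baseline_dp * s) for d, s in DENSITY_SCALE.items()}
```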
[21:23] <snap-l> A better way would be to have the environment handle this stuff so the developer didn't have to work so hard. :)
[21:24] <snap-l> And the resolution problem would disappear overnight
[21:24] <brousch> Well really it's the designer working hard, so who cares?
[21:24] <snap-l> Just seems like we make things hard on purpose.
[21:28] <jrwren> anyone deal with ubuntu preseed or kickstart and have any recommendations for me?
[21:30] <jrwren> gah, wtf, this can't 404, not allowed!  http://people.canonical.com/~kirkland/
[22:19] <snap-l> He's no longer with Canonical, afaik
[22:20]  * greg-g has a new email
[22:21] <greg-g> greg@wikimedia.org ;)
[22:29] <jrwren> congrats greg-g
[22:30] <jrwren> when are you moving to AA? ;]
[22:35] <greg-g> give me a little time.... ;)
[22:40] <snap-l> Is that a promise? :)
[22:47] <greg-g> well, not totally, only a "not-SF" promise, destination may be non-A2 in the end