[00:36] Evening
[01:34] evening
[01:58] nice: http://arstechnica.com/gadgets/2013/02/dells-linux-ultrabook-gets-more-pixels-european-availability/
[01:59] yea, definitely cool
[02:40] wait wait
[02:40] huh?
[02:46] the suspense is killing me
[02:48] waiting for JoDee to get out of class
[02:50] oh that's less dramatic
[02:55] yeah, it should have been read in a singsong voice, rather than an urgent tone.
[04:08] heh
[12:09] http://r.bmark.us/u/a7d2eb71a36a9b ?!
[12:09] * rick_h_ gets wallet out
[13:01] Is it just me, or does anyone else have a hard time getting excited over a tablet?
[13:02] I mean, cool if you're into that sort of thing, but felt like some artificial buzz
[13:04] I <3 my N7
[13:04] Yeah, don't get me wrong, I think the N7 is pretty cool
[13:06] Just the "Ubuntu Tablet, woo woo" is leaving me a little cold.
[13:07] I'll fully admit I'm probably not the target audience. ;)
[13:07] Is it out?
[13:08] announcement today
[13:08] but it'll probably just be the software, hopefully runs on an N7
[13:09] probably? hopefully? You're our inside man! Get the real scoop!
[13:09] lol, but I don't know anything. I didn't know anything until the page went up on the site
[13:09] Considering the "on penalty of death" threats I've heard some folks say... ;)
[13:09] and $#@$# sirius for being a pita to deal with online
[13:09] A little death never hurt anyone
[13:09] rick_h_: Use the SB
[13:10] there's a client for it
[13:10] snap-l: it's for the car stuff
[13:10] You'll need the hardware radio, though. It doesn't support software clients.
[13:10] it's got the traffic/etc and supposedly can get weather but maybe
[13:10] rick_h_: Yeah, but get the client
[13:10] k
[13:10] Was the way I listened to marketplace at 6pm. ;)
[13:11] May as well get the most out of your 3 months
[13:11] yea, but looking into what it'll run after that.
Like the traffic bits, not sure on the radio
[13:11] I'll definitely use it down to Atl and back
[13:11] trial it up ftw
[13:11] Oh definitely
[13:12] That's the only reason I like Sirius is for those long drives
[13:12] but other than that, I have podcasts.
[13:12] though I've got 3 audio books that I could put a dent into
[13:12] and with bluetooth support in the car, I have one less thing to fiddle with
[13:12] and can do pandora over BT from the phone as well
[13:13] rick_h_: You're indirectly paying for Sirius. ;)
[13:13] Might as well use it.
[13:13] yea
[13:13] I let ours expire.
[13:13] and now I'm getting the "6 months free with purchase of a year sub"
[13:13] lol
[13:14] Is the Touareg AWD?
[13:14] brousch: yea
[13:14] I could've used it today
[13:14] brousch: pretty good offroad/etc based on reviews and the like
[13:15] but honestly I had that with the subaru pretty well so just hoping it keeps up
[13:15] I'm already nervous because it's more a 90% front / 10% rear by default for road driving
[13:15] vs subaru 50/50 all the time
[13:16] Subaru's annoys me sometimes
[13:16] Takes too long to transfer power to different wheels
[13:17] I think a manual subaru would be better
[13:17] meh, never gave me grief. <3
[13:18] I didn't notice it as much with the 2000 Forester, but our 2009 Forester is laggy
[13:19] well forester is the poor mans subie imo. Outback or bust
[13:19] when I got my last outback I drove a forester around and it did not feel nearly as well put together as the outback
[13:20] road noise, ergo, everything seemed not up to snuff
[13:20] I agree, but it's the wife's car
[13:20] gotcha
[13:29] ls
[13:29] bah
[13:30] IRCINYCL
[13:34] OAYFM
[13:34] Once Again You Fail Me
[13:35] win7 doesn't have telnet? w.t.f.
[13:35] rick_h_: Why would it?
[13:36] because it always has in the past, from the cli
[13:36] I'm surprised it doesn't ship with PuTTY
[13:36] Other than PuTTY is a miserable interface to a mediocre SSH client
[13:36] PuTTY is my savior
[13:37] It's a heaping dose of adequate
[13:37] Well-put
[13:44] and $@#$@# IE10 and its developer tools of suck
[13:45] Heh
[13:45] OMG you have to use IE?
[13:45] have to have our thing work in IE10
[13:48] That's one thing I love about my big internal project. I can give the finger to IE
[13:49] I've trained everyone to use Chrome and FF, and only use IE when a site doesn't work on the good browsers
[14:10] This is awesome http://www.engadget.com/2013/02/19/3doodler/
[14:25] rick_h_: seeking criticism: https://github.com/jrwren/aggregate
[14:25] jrwren: about to jump on a call but will look in a few
[14:26] jrwren: Looks interesting
[14:26] brousch: not pointless?
[14:27] Depends on the data you're dealing with
[14:27] right.
[14:27] jrwren: What are you looking to do? Keep a running average on a list or dict of data?
[14:28] the goal is to aggregate a long stream, e.g. I'll be running it on a generator which generates 4+GB of data
[14:28] with min and max?
[14:28] snap-l: the running aggregates part isn't done yet, but that is the next step.
[14:28] I could see this being useful
[14:28] snap-l: ultimately, "maybe", but the goal is simply to be able to aggregate a large set.
[14:29] Yeah, at the very least the min / max stuff would be nice.
[14:29] e.g. I'm reading binary data from a custom database on disk, turning the records into dicts and yielding each one in a generator - so only 1 record is ever in memory at a time, but I can aggregate nicely.
[14:29] running average would be tricky
[14:29] hrm. I should document what I just said.
[14:29] jrwren: Bingo. ;)
[14:29] running average is easy.
I have running sum and running count :)
[14:29] Ah, right
[14:29] derp
[14:32] jrwren: Make sure this doesn't suit you http://matplotlib.org/api/mlab_api.html#matplotlib.mlab.rec_groupby
[14:32] I'd have to use numpy
[14:32] which means loading everything into a numpy array.
[14:33] I'm actually dealing with datasets which cannot fit into memory on the machine on which I'll be running them.
[14:33] Also maybe http://pandas.pydata.org/pandas-docs/stable/groupby.html
[14:33] heh
[14:33] Big Data!
[14:33] it's not really.
[14:33] it's more like small machine :)
[14:35] Confucius say, "Big Data in Small Machine make for painful pleasure"
[14:35] <_stink_> hahaha
[14:36] <_stink_> stolen
[14:43] jrwren: Just use RLE. ;)
[14:43] I'll show myself the door now.
[14:51] snap-l: RLE?
[14:51] jrwren: my first reaction, with the knowledge that you're talking a 4GB stream/etc, is that databases already do this and a tmp table would do it in a hurry, and you can then do cooler work like multi-process loading of data or the like if that's io bound.
[14:51] Run Length Encoding. ;)
[14:52] don't have 4gb of memory? ouch
[14:52] snap-l: oh, not on this dataset.
[14:52] it's running on a 32bit machine.
[14:52] jrwren: I'm kidding.
[14:52] so 2GB process limit :)
[14:52] meh, cut --help :P
[14:53] rick_h_: you might be right.
[14:53] I did do exactly what you just said before doing this - pulled it into postgresql, got the numbers I needed.
[14:53] jrwren: Too bad you're not in a physics lab. I'm pretty sure RLE would work there. (All the data is a series of ones or zeroes)
[14:54] snap-l: rofl.
[14:54] jrwren: so I mean this is cool and solves your problem or whatnot, but just not how I'd think to go about it myself.
[14:54] rick_h_: not loading things into postgresql was an assumed constraint.
[14:54] perhaps it was a false assumption on my part.
[14:54] Beware false assumptions.
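The constant-memory approach jrwren describes above (running sum and count for the average, plus min/max, consuming a generator one record at a time) can be sketched in a few lines. This is not the actual `jrwren/aggregate` API; the class and method names here are invented for illustration:

```python
# Minimal sketch of constant-memory stream aggregation, in the
# spirit of the discussion above. NOT the jrwren/aggregate API.

class StreamAgg:
    """Running sum, count, min, and max over a stream of numbers."""

    def __init__(self):
        self.count = 0
        self.total = 0.0
        self.min = None
        self.max = None

    def add(self, value):
        self.count += 1
        self.total += value
        self.min = value if self.min is None else min(self.min, value)
        self.max = value if self.max is None else max(self.max, value)

    @property
    def mean(self):
        # "running average is easy. I have running sum and running count"
        return self.total / self.count if self.count else None

def aggregate(values):
    """Consume an iterable one value at a time (O(1) memory)."""
    agg = StreamAgg()
    for v in values:
        agg.add(v)
    return agg

# A generator expression: values are produced lazily, never stored as a list.
agg = aggregate(x * 2 for x in range(1000))
print(agg.count, agg.min, agg.max, agg.mean)  # 1000 0 1998 999.0
```

Because only the four accumulators live in memory, this works the same whether the generator yields a thousand values or a terabyte's worth.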
[14:55] jrwren: even something like a NOSQL db could work, as you could create a view that does the agg and start loading, and it'll update per insert
[14:55] First commandment: Thou shalt have no other assumptions besides me
[14:55] perhaps automating the postgresql import is the right way
[14:55] jrwren: if you're sure this is all you need then cool, go for it. I just find that these types of things end up getting new twisted requirements over time and the solution doesn't scale up/out like a real db
[14:55] rick_h_: which NOSQL db? mongo sucks to go to disk. redis is KV. what do you recommend?
[14:56] jrwren: honestly, I was thinking about couch and how when you create a view it auto updates the view on insert
[14:56] i'll explore that next. thanks.
[14:56] so load time takes the hit but reading is instant since it's pre-calculated
[14:56] right.
[14:56] not sure if you've got a need for multiple reads/etc
[14:56] Also PostgreSQL can emulate KV and store JSON
[14:57] heh, pgsql hstore on group_by value = sum :P
[14:57] snap-l: exactly, which is why I never bother with mongo or redis :)
[14:57] but that will probably be slower because you will run out of ram and the table will hit disk, but it's a TON more flexible.
[14:57] so depends on your goals.
[14:57] rick_h_: views. ;)
[14:58] snap-l: not following
[14:58] use a view to get the average
[14:58] Though you take the compute hit every time you want it. ;)
[14:58] i've already done exactly all of this with postgresql.
[14:59] right, a view only helps if it's materialized and such, but yea. That's what I mean though. It's a solved problem there
[14:59] use the exact same py generator reader to create a COPY FROM import, then index on 1 column, then do my aggregation.
[14:59] it works well.
[14:59] this agg module can help a specific use case, but then it seems like there'd be numpy/etc stuff that does this as well.
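The incremental group-by aggregation described above (a couch view updating per insert, or "pgsql hstore on group_by value = sum") can also be emulated in plain Python while streaming. A sketch only, with made-up record fields, not any particular database's behavior:

```python
from collections import defaultdict

def groupby_sum(records, key, field):
    """Incrementally maintained GROUP BY key -> SUM(field).

    Each record updates the running totals as it streams past,
    much like a view recomputed per insert; only the per-group
    totals stay in memory, never the records themselves.
    """
    totals = defaultdict(float)
    for rec in records:
        totals[rec[key]] += rec[field]
    return dict(totals)

# Hypothetical records, fed from a generator so nothing is buffered.
rows = ({"host": h, "bytes": b}
        for h, b in [("a", 10), ("b", 5), ("a", 7)])
print(groupby_sum(rows, "host", "bytes"))  # {'a': 17.0, 'b': 5.0}
```

The trade-off matches the chat: memory is bounded by the number of distinct groups rather than the number of records, so this only helps when the group cardinality is small.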
[14:59] but I've not used it enough to speak about it intelligently
[14:59] the issue is the requirement is to run on ec2 w/out ebs, so I'd have to recreate the postgresql db on demand.
[15:00] Yeah, and I'm not sure how efficient it is either.
[15:00] afaik numpy/etc all needs to fit into ram.
[15:00] jrwren: copy/load from s3
[15:00] rick_h_: it's in s3 now, as raw data.
[15:00] ah gotcha
[15:01] What's to stop you from keeping the database files on S3?
[15:01] (seriously doesn't understand S3)
[15:01] s3 is slow.
[15:01] because pgsql splits things into a ton of diff files based on the block size and it'd be a mess in s3
[15:02] kk
[15:02] and yea, compared to local it's very slow, but pretty fast as far as internet goes, especially on ec2 inside the network
[15:02] heck of a lot faster than uploading from your machine to the instance
[15:02] unless you're in KC I guess :P
[15:02] lol.
[15:02] given it's a TB of data...
[15:03] oooh, thought it was 4GB?
[15:03] lol, no.
[15:03] the whole point is that I never load more than 1 record into ram at once.
[15:03] I just assume a 2GB limit since I'm on a 32bit instance.
[15:04] "it doesn't fit into ram" is the #1 requirement.
[15:04] well hell, for TB of data we're talking hadoop/reduce functions
[15:04] nope, will not JVM.
[15:04] no hadoop allowed :)
[15:04] unless of course, we rewrite hadoop.
[15:04] pydoop.
[15:05] thankfully my coworkers and mgmt have the same aversion to jvm that I do.
[15:06] https://github.com/michaelfairley/mincemeatpy or http://mikecvet.wordpress.com/2010/07/02/parallel-mapreduce-in-python/ for some fun weekend tinkering :)
[15:07] http://engineeringblog.yelp.com/2010/10/mrjob-distributed-computing-for-everybody.html is cool but uses hadoop under the hood
[15:07] thanks.
[15:08] but yea, TB of data like that is a straight up map/reduce problem and there's stuff to do that.
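The "straight up map/reduce problem" framing above boils down to: map each file (or chunk) to a small partial aggregate, then reduce the partials. A toy sketch of that split, where the in-memory lists stand in for the real s3 objects:

```python
from functools import reduce

# Toy map/reduce over many chunks, illustrating the shape of the
# problem discussed above. The "chunks" here stand in for s3 files.

def map_chunk(values):
    """Map step: one chunk -> a partial (sum, count)."""
    total = count = 0
    for v in values:
        total += v
        count += 1
    return (total, count)

def combine(a, b):
    """Reduce step: merge two partial aggregates."""
    return (a[0] + b[0], a[1] + b[1])

chunks = [[1, 2, 3], [4, 5], [6]]          # pretend s3 files
partials = (map_chunk(c) for c in chunks)  # the map step could run in parallel
total, count = reduce(combine, partials)
print(total / count)  # 3.5
```

Because `combine` is associative, the partials can be merged in any order, which is exactly what lets frameworks like hadoop fan the map step out across workers.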
[15:08] and one of the things couch did kind of cool
[15:08] except I can do it in a timely manner using exactly what I just wrote.
[15:08] MR is overkill.
[15:09] yea, so what are you doing then? Splitting one s3 file into a dozen, firing up 12 micro workers, and running your script?
[15:09] i mean, it's only 1TB of data. I can read that off disk in a reasonably short period of time. IO is vastly slower than proc, so some math on what I just read is near zero overhead.
[15:10] thousands of s3 files, single small instance, for i in files; do keepsum $i ; done
[15:10] something like that :)
[15:10] gotcha
[15:10] you're doing MR, just your micro framework for it committed to a specific function run :P
[15:10] i will probably never process the whole TB at once.
[15:10] of course.
[15:10] EVERYTHING is map and reduce.
[15:11] lol
[15:11] no no - it's true.
[15:11] it's a fundamental tenet of functional programming.
[15:11] and map is just a special case of reduce.
[15:11] or is that the other way around, I forget.
[15:12] i really appreciate the dialog. I'll try some other things too.
[15:12] TY
[16:12] http://www.omgubuntu.co.uk/2013/02/ubuntu-for-tablet-unveiled-by-canonical-nexus-7-download-coming-thursday
[16:12] I almost bought a Nexus 10
[16:13] yea, I'm on the edge of doing that right now
[16:13] love the N7 and curious if a 10 would be any good. This way I could test out ubuntu on it and keep my N7 for the handy stuff I love on it
[16:13] what sucks about the 7 and 10
[16:13] is they're totally different hardware
[16:13] yea :/
[16:13] two diff companies
[16:13] at first I was like "oh cool, same hardware, different form factor."
[16:13] of course, that would be too easy
[16:13] lol
[16:21] it's only 200G of ram :)
[16:51] oooh, snow...go go go. I want to go play
[16:53] Blazeix: have you used the google maps api some?
[17:22] ...going...to...have...meltdown...
[17:22] What happened?
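The "map is just a special case of reduce" claim above is easy to demonstrate concretely: a map is a fold whose accumulator is the output list. A toy illustration, not anyone's real framework:

```python
from functools import reduce

def map_via_reduce(fn, items):
    """map expressed as a fold: start from [] and append fn(x)
    for each element. Quadratic-ish because of the list copies,
    so purely illustrative."""
    return reduce(lambda acc, x: acc + [fn(x)], items, [])

print(map_via_reduce(lambda x: x * x, [1, 2, 3]))  # [1, 4, 9]
```

Going the other way (reduce in terms of map) doesn't work without extra machinery, since map can't carry state between elements; that asymmetry is why reduce is the more general of the two.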
[17:23] this stupid project is set up in the most stupid way and I'm getting tired of working around it to submit a stupid 2-line patch...
[17:23] every time I fix one thing another is broken...dammit, these aren't stupid devs. Why are they trying to make me want to press the nukes button?
[17:24] They don't know any better?
[17:25] Or better left the station a while ago, and now we're stuck with a culture of meh?
[18:17] http://www.ubuntu.com/devices/tablet does look kind of cool (the video)
[18:51] Who dressed Shuttleworth for that video?
[18:51] He looks like a homeless guy
[18:55] Come on. You're a billionaire playboy cosmonaut pimping the future of technology. Trim the chest hair!
[18:58] <_stink_> depends on what he's trying to attract
[18:59] werewolves?
[19:01] <_stink_> mmm
[19:02] Or are you saying you want to dig your hands deep into his chestfur and give it a playful tug?
[19:03] <_stink_> suddenly i see your point
[19:10] _stink_: I think you mean you can't unsee his point.
[19:12] Why are you guys looking at my point? Don't make me put pants on.
[19:21] <_stink_> good thing years of scientific training have ruined my imagination.
[20:08] snow squalls wooo
[20:57] he does not look homeless.
[20:58] n0p: hey
[20:58] "gracefully on different screen sizes and resolutions" HOW did THEY DO THAT?!?!
[20:59] jrwren: The desktop is a responsive web page. ;)
[20:59] oh.
[21:00] Actually, I'm not sure how they did that, but I have a feeling that might not be that far off
[21:00] SVG all the things
[21:00] i look forward to hearing about an official dev kit :)
[21:00] I'm surprised more devices don't do SVG natively
[21:01] that and how Apple does their graphics (PDF / Postscript) seem like the best way to tackle different resolutions.
[21:02] it's not how apple does different resolutions though.
[21:02] Right
[21:03] I know that's how they scale their icons, though
[21:05] Wonder how much work it would be to apply an SVG-like canvas to mobile devices
[21:05] so instead of saying "plot this bitmap at 40x50", you could say "object blah blah is this size relative to the canvas, and is relatively in this position"
[21:06] maybe my request answers my question of how much work it would be. ;)
[21:08] You can make a GUI using percentages and relative sizes with Kivy
[21:08] snap-l: no, you are wrong.
[21:08] that is not how apple scales their icons.
[21:08] at least not on iOS.
[21:08] apple does not scale their icons.
[21:08] you ship every needed icon resolution with your app.
[21:08] jrwren: On OSX I believe that's how it works
[21:08] Yeah, and I think that's dumb. :)
[21:08] yeah, on OSX, it must.
[21:08] i mean zooming the dock is the example, right?
[21:08] right
[21:09] I know that's how it's done on Android, because when I dumped JoDee's SD card for images there were a ton of different sized icons
[21:15] http://developer.android.com/guide/practices/ui_guidelines/icon_design.html
[21:16] You need like half a dozen different icons, and then you need to make them for multiple different densities
[21:16] yeah, that's dumb
[21:17] SVG all the things. ;)
[21:18] They suggest you use a vector image so that making all the static images will be easier
[21:22] I know this is probably to help make slower devices not have to work so hard, but I'd prefer it to render that stuff on the fly
[21:22] a real developer would create one SVG and automate making the little ones from it
[21:23] Actually a real developer would write a program to generate the SVG and the little ones
[21:23] A better way would be to have the environment handle this stuff so the developer didn't have to work so hard. :)
[21:24] And the resolution problem would disappear overnight
[21:24] Well really it's the designer working hard, so who cares?
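The "one SVG, automate making the little ones" idea above is mostly arithmetic: Android defines a baseline size in density-independent pixels (a launcher icon is 48dp) and a documented scale factor per density bucket. A sketch of computing the raster sizes; a real script would then feed each size to an SVG rasterizer (rsvg-convert, cairosvg, etc.), which is left out here:

```python
# Android's documented density scale factors relative to mdpi.
DENSITY_SCALE = {
    "ldpi": 0.75,
    "mdpi": 1.0,
    "hdpi": 1.5,
    "xhdpi": 2.0,
    "xxhdpi": 3.0,
}

BASELINE_DP = 48  # launcher icon size at mdpi (48x48 px)

def icon_sizes(baseline_dp=BASELINE_DP):
    """Pixel size of the icon for each density bucket."""
    return {d: round(baseline_dp * s) for d, s in DENSITY_SCALE.items()}

print(icon_sizes())
# {'ldpi': 36, 'mdpi': 48, 'hdpi': 72, 'xhdpi': 96, 'xxhdpi': 144}
```

Looping this over every icon in an app is exactly the automation the chat proposes, which is also why the Android guidelines suggest keeping the master artwork as a vector.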
[21:24] Just seems like we make things hard on purpose.
[21:28] anyone deal with ubuntu preseed or kickstart and have any recommendations for me?
[21:30] gah, wtf, this can't 404, not allowed! http://people.canonical.com/~kirkland/
[22:19] He's no longer with Canonical, afaik
[22:20] * greg-g has a new email
[22:21] greg@wikimedia.org ;)
[22:29] congrats greg-g
[22:30] when are you moving to AA? ;]
[22:35] give me a little time.... ;)
[22:40] Is that a promise? :)
[22:47] well, not totally, only a "not-SF" promise, destination may be non-A2 in the end