[00:05] what is n2n? [00:05] ntop.com [00:05] you can add tun to /etc/modules to ensure it's loaded [00:06] asac: #ubuntu+1 is running on it [00:07] modprobing tun fixes [00:07] it [00:07] seems lucid stopped modprobing tun 2 weeks ago [00:07] now we are trying to target a few more tests :\ [00:09] check older kernel [00:09] maybe tun was not a module before [00:10] hm, java is not working out of the box in chromium, seems like a symlink is needed [00:11] weird [00:11] fta: was fine one week ago [00:11] when I tested [00:12] can't get my dynamic stock charts [00:14] fta: where does chromium look for plugins? [00:14] mozilla/plugins? [00:15] or also xulrunner-addons/plugins? [00:15] it's a bit late to buy stock though, last quotation day of the year [00:15] i should probably wait [00:16] tomorrow trading for half a day in the US afaik [00:17] yep [00:18] * asac still hopes for a new bank collapse :) [00:18] but i think that is over for now [00:19] (12:18:38 AM) seg|ars: the new backend is uncrashable [00:19] (12:19:03 AM) seg|ars: no seriously, in the next major version the backend is significantly more stable [00:19] fta: ^^^^^^^^^ (gwibber) [00:19] problem is that housing prices only got boosted here in hamburg during this whole crisis ... i hoped i could buy a cheap flat ;) [00:20] (12:04:51 AM) seg|ars: yeah, trunk has been quiet because of this rewrite [00:20] (12:04:59 AM) seg|ars: I haven't even pushed it to a branch yet. It's not quite usable at the moment [00:20] (12:05:35 AM) seg|ars: I'm working on it full-time this week [00:20] asac: boosted? here they drop like rocks [00:20] yeah [00:20] here they doubled ;) [00:20] WOUTCH [00:20] here too [00:20] seems everyone wanted a safe investment [00:20] and moved their money from spain etc. to hamburg downtown :) [00:21] really a pity [00:21] i hoped sooo much ;) [00:21] current prices are not worth it, far from it [00:21] well. 
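The tun tip at the top of the log (add `tun` to /etc/modules, or `modprobe tun` by hand) can be sanity-checked by scanning /proc/modules. This small helper is my own illustrative sketch, not something from the channel; the function name and the text-parameter shape are assumptions for testability.

```python
from pathlib import Path

def module_loaded(name, text=None):
    """True if `name` is listed in /proc/modules-style text.

    Reads the live /proc/modules (Linux only) when no text is given;
    each line starts with the module name followed by size/usage fields.
    """
    if text is None:
        text = Path("/proc/modules").read_text()
    return any(line.split()[0] == name
               for line in text.splitlines() if line.strip())
```

On a lucid box showing the symptom discussed here, `module_loaded("tun")` returning False before `modprobe tun` and True after would confirm the diagnosis.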
in paris that might be true [00:22] but german city prices were really undervalued before compared to other european countries ... so i dont see them falling again [00:22] :( [00:22] maybe if interest doubles [00:22] but that would kill spain etc. even more i guess :) [00:22] so i dont see that happening either [00:23] (12:20:01 AM) seg|ars: it uses a model similar to chrome. All of the message retrieval and processing operations are performed in separate processes [00:23] (12:20:15 AM) seg|ars: if any of those processes fail, even if they suffer a segmentation fault, the daemon just keeps on running [00:23] (12:21:36 AM) seg|ars: the process pool is created at the beginning of a refresh cycle and destroyed at the end [00:23] (12:21:43 AM) seg|ars: so it also completely insulates the backend against memory leaks [00:23] thats n2n? [00:23] whats the benefit of a peer-to-peer vpn ;) [00:23] for isolated countries? [00:23] ? [00:24] building bridges? [00:26] or just easy to setup? [00:26] seems that's gwibber [00:27] asac: that's gwibber [00:27] heh [00:27] from segphault [00:27] which has been dead for weeks/months [00:27] yeah [00:27] in #statusnet [00:27] why do you think I'm cc'ing you ? [00:27] lol [00:27] i had no time to talk about that at UDS unfortunately [00:27] i tried to prevent such things from happening [00:27] i would love more controls in gwibber, to depend less on the web interface [00:28] i didnt recognize his nick :) [00:28] but more controls doesnt happen by making a multi-process backend framework [00:28] asac, prevent which things from happening? [00:29] fta: for it to go stale [00:29] that they put so much work into making a super scalable multi-process backend [00:29] sure, multi-process for gwibber seems weird to me [00:29] its a big misunderstanding. they didnt know how to do it right [00:29] do you want me to tell that to segphault ? [00:29] lol [00:29] thats why they think it helps to make it even more complex ;) [00:29] BUGabundo: i already raised that ... 
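The model segphault describes (a fresh process pool per refresh cycle, a failing worker not taking the daemon down, teardown reclaiming leaked memory) can be sketched roughly like this in Python. All names here are illustrative, not Gwibber's real API, and an ordinary exception stands in for a hard worker crash:

```python
import multiprocessing as mp

def fetch_account(account):
    """Worker: pretend to retrieve and process messages for one account."""
    if account == "broken":
        raise RuntimeError("simulated backend failure")
    return "messages for " + account

def refresh(accounts):
    """One refresh cycle: create a fresh pool, collect results, destroy it."""
    results = {}
    with mp.Pool(processes=2) as pool:  # pool lives only for this cycle
        pending = {a: pool.apply_async(fetch_account, (a,)) for a in accounts}
        for account, res in pending.items():
            try:
                results[account] = res.get(timeout=30)
            except Exception:
                results[account] = None  # one failing worker doesn't stop the daemon
    return results
```

Destroying the pool at the end of each cycle is what gives the leak insulation: whatever a worker allocated dies with its process.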
at least to ken [00:29] ok [00:30] unfortunately i had no time to thoroughly talk to him at UDS [00:30] fta: web interface gets better though ;) [00:31] yep, but that also means gwibber is lagging behind :P [00:31] i think the reason for all this is that there were some ominous gwibber backend hangs because of facebook [00:31] which i never saw after my final fixes ;) [00:33] I don't even have FB account [00:33] the outstanding thing is to define a good dbus api and properly deal with dbus timeouts (which still crash gwibber) [00:34] those are less likely though [00:35] ok so with some luck tomorrow we have working chromium for armel on karmic and lucid ;) in a publicly accessible ppa. [00:35] would be a great EOY [00:35] :-P [00:36] then we have two weeks to get that in the archive for alpha-2 :-P [00:37] but with the gyp license bug i dont see that happening ;) [00:37] fta: do all tests succeed in the ppa? [00:38] on x86 [00:38] nope [00:38] far from it [00:39] various issues, ms fonts, network accesses forbidden, shared memory forbidden, and various other stuff [00:40] what do they attempt to do with network access [00:40] i wanted to enable perf tests, but i'm afraid it's not possible within the builders [00:40] maybe you can fire up a http server during build on some unprivileged port on localhost [00:41] test the http stack, dns stack, ftp stack, etc [00:41] i think perf tests are the last we should care about :) [00:41] yes, but what are they doing? [00:41] special webpages [00:41] ? [00:41] from some public server [00:41] ? [00:41] i really need to compare our build with the official ones, some people claim we're slower [00:42] is there data available to back up that claim? [00:42] for network, sometimes it's against google.com, sometimes against localhost (they start a small server locally) [00:43] nope, just claims, no proof [00:43] ok so google.com is the problem? 
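asac's suggestion above (fire up a throwaway HTTP server on an unprivileged localhost port during the build, so network tests never reach google.com) can be sketched as follows. This is a minimal illustration of the idea, not Chromium's actual test harness:

```python
import http.server
import threading
import urllib.request

def start_local_server():
    """Serve the current directory on 127.0.0.1 with an OS-assigned port.

    Passing port 0 makes the kernel pick a free unprivileged port, which
    avoids collisions between concurrent builds on the same builder.
    """
    httpd = http.server.ThreadingHTTPServer(
        ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd, httpd.server_address[1]

httpd, port = start_local_server()
with urllib.request.urlopen("http://127.0.0.1:%d/" % port, timeout=5) as resp:
    status = resp.status  # 200 if the loopback round trip works
httpd.shutdown()
```

Whether even this works inside the Launchpad builders is exactly the open question in the log: the channel reports that localhost access fails there too.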
[00:43] that really feels like ok to skip [00:43] no; even localhost doesn't work [00:43] really? [00:43] yep [00:43] how? [00:43] what port? [00:43] i don't remember [00:44] that would bust PGO for firefox [00:44] we need a proxy for that too [00:44] asac: how is PGO? [00:45] dont know ;) [00:46] dead ? ;) [00:46] upstream doesnt use it afaik because it didnt work for them [00:46] i will try again though ;) [00:46] stuff like: Creating shared memory in /dev/shm/com.google.chrome.shmem.unit_tests-28518 failed: Permission denied [00:46] yeah [00:46] those i saw [00:46] thats because its in a chroot i think [00:46] bindmounting /dev somehow doesnt do that recursively for /dev [00:46] i saw that in my chroot [00:46] most likely fakeroot's fault [00:47] so udev would need to create stuff for chroot i guess (no clue what i am talking about here) [00:47] could be fakeroot ... but in chroot its definitely a problem in bindmount [00:47] not sure if builders use a chroot in a xen image or a native xen image [00:48] hmm. too bad i deleted the old builds [00:48] maybe the tests worked on the real builders [00:48] and now i only build on armel [00:49] http://launchpadlibrarian.net/37208452/buildlog_ubuntu-karmic-i386.chromium-browser_4.0.283.0~svn20091226r35283-0ubuntu1~ucd1~karmic_FULLYBUILT.txt.gz [00:49] fta: ^^that's the last build log on real builders [00:50] ok same issue [00:50] [6631:6631:1226/172727:443931680155:ERROR:base/shared_memory_posix.cc(192)] Creating shared memory in /dev/shm/com.google.chrome.shmem.SharedMemoryOpenCloseTest failed: Permission denied [00:50] base/shared_memory_unittest.cc:131: Failure [00:50] Value of: rv [00:51] fta: where are the branches for the beta/dev channel? [00:51] do you auto fork them with appropriate checkpoint commit? [00:51] or dont you even do that in branches? [00:52] hm, no, i forked .head once for each channel, then they will need merging from head :( [00:53] well. that shouldnt be a problem, should it? 
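The `SharedMemoryOpenCloseTest` failure above boils down to POSIX shared memory under /dev/shm being unwritable inside the build chroot. The same create/write/read/unlink cycle can be exercised from Python, which also backs `SharedMemory` with /dev/shm on Linux; on a correctly set up system this round trip succeeds, while in the broken chroot it would raise `PermissionError` just like the Chromium unit test. The helper name is mine, for illustration:

```python
from multiprocessing import shared_memory

def shm_round_trip(payload):
    """Create a POSIX shared-memory segment, write into it, read it back.

    On Linux the segment appears as a file under /dev/shm, so this probes
    the same permission path as Chromium's shared_memory_posix.cc test.
    """
    seg = shared_memory.SharedMemory(create=True, size=len(payload))
    try:
        seg.buf[: len(payload)] = payload          # write through the mapping
        copy = bytes(seg.buf[: len(payload)])      # read it back
    finally:
        seg.close()
        seg.unlink()                               # remove the /dev/shm entry
    return copy
```

Running this inside a suspect chroot is a quick way to tell whether /dev/shm was bind-mounted properly before blaming the test suite.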
[00:53] the branches are on lp, like for trunk [00:53] no, but it's manual [00:53] feels natural. fork if upstream forks. then just track security landings on their branches [00:53] then merge if they merge too [00:53] of course you dont know when they merged and when they updated [00:53] but maybe you can guess that by how the version scheme moves forward [00:53] for that we first need to see non-merged landings on beta/dev channel upstream branches i guess [00:55] the bot will just trigger new builds if either the packaging branch changed, or upstream published a new build [00:56] it's just get-orig-source CHANNEL={beta,dev} [00:56] sure [00:56] but upstream has two ways to publish a new build [00:56] a) update build with a cherry pick (security/stability) [00:56] b) bump to new release (e.g. merge trunk/beta) [00:57] for a) you just move the .beta/.dev branch forward ... when b) happens you need to merge .head etc. [00:57] but i think thats clear ;) [00:57] err planned [00:57] they don't merge, they fork off from trunk, and cherry pick as long as they stick to that branch [00:57] right [00:58] so when they switch dev to a new branch we need to merge from .head ... if a branch moves forward we can just bump changelog in beta/dev branches [00:59] at least i would hope they dont make new trunk forks that lie in the past [00:59] fta: licensecheck.pl doesnt recurse in directories? [00:59] --help isnt that helpful ;) [00:59] -r [00:59] hm [01:00] oh, mine? [01:00] perl debian/licensecheck.pl -r build-tree/src/ [01:00] error [01:00] Unknown option: r [01:00] Usage: debian/licensecheck.pl [options] directory -a display all licenses found (default will hide whitelisted licenses) -h this help screen [01:01] i call licensecheck -r --copyright $dir [01:01] so yes, it's recursive [01:02] ok because you have "filelist" in --help [01:02] oh its just confusion. ok [01:03] doesnt print anything :( [01:03] the bug is moving forward [01:03] wait [01:03] really? [01:03] gyp? 
[01:03] cool [01:04] if we could upload that on jan 4 that would be a good step ;) [01:04] no [01:04] http://code.google.com/p/chromium/issues/detail?id=28291 [01:05] you should link yours there [01:05] i cannot link even with my underpowers ;) [01:05] cant do anything but comment and star ;) [01:06] lol [01:06] which one is yours? [01:06] dunno ;) [01:06] its filed against gyp [01:07] http://code.google.com/p/gyp/issues/detail?id=133 [01:07] done [01:09] fta: so that script buffers all results? [01:09] it consumes cycles [01:09] but doesnt dump anything [01:09] i run that on the armel board fwiw ... so it will take some time to finish ;) [01:09] it's needed to sort the results [01:10] on arm! lol, run that locally, you're crazy [01:11] i am too lazy to extract the source ;) [01:11] also i am on a mini 9 [01:11] fta: how about dumping DEP-5 format ;) [01:11] you already seem to sort that stuff somewhat. [01:12] build-tree/src/third_party/yasm/source/patched-yasm/tools/python-yasm/pyxelator/ [01:12] is UNKNOWN [01:12] DEP-5 format? [01:12] yes. new parsable copyright format [01:13] http://dep.debian.net/deps/dep5/ [01:13] i think using DIR/* for dirs with all files of same license would work [01:13] otherwise listing them explicitly [01:13] by dir [01:13] so the Files: line doesnt get too long ;) [01:14] it's possible, everything is possible [01:14] in theory you could probably dump all files for each license in one huge Files: ;) [01:14] well [01:14] i hoped it would be easy for you ;) [01:14] in fact, that's why i added -a [01:15] i think i might be able to do something ;) [01:15] checked the code [01:15] all i want to know is how to check length of the eys %{$$data{$dir}} [01:15] keys [01:15] in perl [01:16] fta: any hint ;)? [01:16] length("foo") => 3 [01:17] keys %hash returns a list [01:17] length takes a string [01:17] copyright info is probably quite incomplete atm? 
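The DEP-5 idea floated above (emit `Files: DIR/*` when every file in a directory carries the same license, otherwise list files explicitly) can be sketched as follows. The discussion is about fta's Perl script; this is a Python illustration of the grouping logic only, and the input/output shapes are my assumptions:

```python
from collections import defaultdict
import posixpath

def dep5_stanzas(file_licenses):
    """Collapse per-file license results into DEP-5-style Files entries.

    file_licenses: {path: license} as a licensecheck-style scan might yield.
    Returns a list of (files_pattern, license) tuples: "dir/*" when the
    whole directory shares one license, individual paths otherwise.
    """
    by_dir = defaultdict(dict)
    for path, lic in file_licenses.items():
        by_dir[posixpath.dirname(path)][path] = lic
    stanzas = []
    for d in sorted(by_dir):
        licenses = set(by_dir[d].values())
        if len(licenses) == 1:                  # uniform dir: one wildcard entry
            stanzas.append((d + "/*", licenses.pop()))
        else:                                   # mixed dir: list each file
            for path in sorted(by_dir[d]):
                stanzas.append((path, by_dir[d][path]))
    return stanzas
```

A real debian/copyright also needs the Copyright field per stanza, which is exactly the "sort by copyright holder within each license" extension discussed next in the log.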
[01:18] maybe we should just gen the full list ;) [01:18] and not try to merge [01:18] we would need to merge by copyright holder too [01:18] hmm [01:18] i sort by license [01:18] yes. if we now could also sort by copyright holders within each dir that would be perfect [01:19] probably requires some computing, but the files per dir list should be short enough :) [01:20] iirc, i already did that [01:20] i dont think so [01:20] there should be another loop [01:20] first extract all copyrights into a unique set [01:21] then filter all files returned for each copyright for each license ;) [01:21] hehe [01:21] i sort by dir, then by license [01:21] then by file [01:21] the 3 for() loops at the end [01:22] yes. but not by copyright holder [01:22] thats just dumped [01:22] having those also sorted below the license level would help a lot and hopefully would allow a bunch of dirs to be just * [01:23] then we just need to resort the whole list by dir depth and dep-5 is valid and great ;) [01:24] current copyright is 3.5 M ;) [01:24] i mean the raw dump [01:25] so [01:25] for my $license (sort values %{$$data{$dir}}) [01:25] err [01:25] for my $copyright (sort values %{$$data{$dir}}) [01:25] would give a sorted list of copyright holders in a dir? [01:31] fta: http://paste.ubuntu.com/349346/ ;) [01:31] to give the idea ;) [01:31] ignore my unperlism [01:31] maybe help me fix that ;) [01:33] er, what did you change? [01:33] hmm. doesnt like continue ;) [01:34] fta: the copyright sorting in the license loop [01:34] what is the continue equiv in perl? [01:34] next? [01:34] yes, but you have two nested loops over the same thing.. [01:34] hm [01:36] oh [01:36] i wanted values [01:36] not keys [01:36] for my $copyright (sort values %{$$data{$dir}}) { [01:36] that way [01:37] help ... 
why is there no equiv to continue; ;) [01:37] hm, no, it's a 3 levels hash table [01:37] not sure why next wants a parameter :) [01:37] continue => next, break => last [01:38] yes, but next without an argument hates me :( [01:38] find docs about next FILE; [01:38] not sure what i should put there for a for loop ;) [01:38] nope, next alone is fine [01:38] with a ";" [01:38] hmm [01:38] i tried [01:38] while(1) [01:38] it's not python [01:38] next; [01:38] that failed :) [01:38] sure [01:39] i am used to C ;) [01:39] sure, you need braces [01:39] print "foo" if $bar; or if ($bar) { print "foo"; } [01:39] sure [01:40] oh [01:40] you cannot have a one line thing like in C? [01:40] odd [01:40] ok [01:40] yeah ... now it does something ;) [01:41] yeah ... so what is in values? [01:41] my eyes are closing fast [01:41] another map? [01:41] sure [01:41] the ref of a hash [01:41] imho, you're doing it the wrong way [01:42] its definitely suboptimal ;) [01:43] will try something else ;) [01:43] i don't understand what you're trying to do with your loop [01:45] dont bother [01:45] i will play around a bit ;) [01:45] good night :) [01:47] thanks, we'll rediscuss that next time [02:37] dogatemycomputer: why did you mark that bug invalid? [02:38] micahg: I was told if it was reported upstream then it should be marked invalid for the ubuntu project. is that incorrect? [02:38] dogatemycomputer: only for kubuntu packages [02:38] micahg: ohhhh.. I wish this was documented somewhere. :-( [02:38] dogatemycomputer: that's the only place it should say to invalidate :) [02:39] * micahg fixed it [02:39] micahg: I don't think it actually says one way or the other. What should I mark it as? [02:39] dogatemycomputer: once it's upstreamed it should be marked triaged and if there's no importance, it should be set [02:40] micahg: ahhh.. i'll try that then. I don't think I can set importance but I think i can mark it triaged. 
[02:40] dogatemycomputer: you probably can't do either [02:40] micahg: i'm not in BugSquad.. mainly because i'm still learning. [02:41] but you go into #ubuntu-bugs and ask a -control member to do it for you [02:41] micahg: I think I can mark it 'complete' though.. and past experience says someone will come along and change it to triaged. [02:41] dogatemycomputer: you should join bugsquad while learning [02:42] micahg: honestly.. I like the idea that I can't do something terribly wrong yet. :-) [02:42] dogatemycomputer: bugsquad has no special privs [02:42] micahg: I do plan to join though. Probably in March or April. It says in the docs that I should have some experience before asking to join. [02:42] bugcontrol has special privs [02:43] dogatemycomputer: let's go to #ubuntu-bugs [02:43] micahg: okay.. [02:44] [reed]: can you push something I got approval for? [02:47] <[reed]> micahg: sure [02:47] <[reed]> bug #? [02:47] mozilla 510040 [02:47] Mozilla bug 510040 in JavaScript Debugging APIs "[PATCH] Fix JS debugger crash on 64-bit: don't truncate PC to jsuint in jsds_FilterHook" [Minor,Resolved: fixed] http://bugzilla.mozilla.org/show_bug.cgi?id=510040 [02:48] [reed]: I didn't test the patch against the branch [02:50] <[reed]> micahg: done [02:50] [reed]: thanks === \vish is now known as mac_v === asac_ is now known as asac === mac_v is now known as \vish [10:13] morning [11:20] hi [11:27] hi asac [11:27] ready for the PARTTTYYYY ? [11:30] yeah [11:30] well. ... not really ready. but preparing atm ;) [11:32] ok packing things and then moving to different city for this event ;) [11:33] see you next year, everybody!! 2010!! [11:36] bye asac [11:36] enjoy [11:37] and guud luck entering 2010 === BUGabundo_work is now known as BUGabundo_lunch === BUGabundo_lunch is now known as BUGabundo_work === mac_v is now known as \vish [15:52] asac: fta: http://paste.ubuntu.com/349581/ === \vish is now known as mac_v === mac_v is now known as \vish [16:34] bye guys. 
see you tomorrow. enjoy your party. i know i will [17:15] something is broken in the way TB 3 replies to the messages. the quote doesn't appear to be properly formatted when i look at it in Gmail [17:18] mbana: and the same setting worked in TB2? [17:20] yes. i just checked. it only applies when replying in html mode [17:20] the quote is indented as opposed to proper quoting [17:20] mbana: can you file a bug in LP with screenshots? [17:20] if u use gmail, u can see it yourself. [17:21] just reply to a message in TB 3 in HTML form. look at the message in Gmail and unhide the quote [17:21] it's indented and not quoted. [18:06] mbana: I don't have time to test right now === micahg1 is now known as micahg
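The "indented, not quoted" symptom mbana describes can be illustrated with the two HTML shapes involved. This is a guess at the markup difference, not taken from an actual TB 3 message: clients like Gmail recognize a `<blockquote type="cite">` as quoted text they can collapse, while a merely indented block reads as ordinary content.

```html
<!-- Proper HTML quoting, which mail clients recognize as a quote: -->
<blockquote type="cite">
  <p>original message being replied to</p>
</blockquote>

<!-- versus plain indentation, which is what "indented as opposed to
     proper quoting" suggests the TB 3 reply produced: -->
<div style="margin-left: 40px;">
  <p>original message being replied to</p>
</div>
```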