/srv/irclogs.ubuntu.com/2018/09/24/#ubuntu-devel.txt

=== cpaelzer_ is now known as cpaelzer
juliankcjwatson, wgrant Regarding delta indexes, I built a fake Deltas index for xenial->xenial-updates with 3 deltas per update, and I landed at 563 KB for main's Deltas.xz vs 829 KB for its Packages.xz12:08
juliankThe fields were Package, Old-ID, New-ID, Size, SHA25612:08
juliankNot convinced12:08
juliankI wonder if I should "just" inject SHA256 of complete debs in the dpkg status database12:10
juliankand then I can save one hashsum12:11
juliankbecause new-id = sha256 of delta12:11
juliankum no12:11
juliankI can however save one ID, by using SHA256(old id | new id) as a field12:12
juliankbrings us down to 485 KB12:12
juliankSo, with Deltas index files, we'd be looking at a 60% update time increase for an 80% install download-time decrease12:14
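
    (For concreteness, one entry in such a Deltas index might look roughly like the stanza below; the field names come from the discussion above, with "ID" standing in for the combined SHA256(old id | new id) field, and all values are invented placeholders rather than real data:)

        Package: hello
        ID: <SHA256 of "old id | new id">
        Size: 4711
        SHA256: <checksum of the delta file itself>
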
juliankThere was in fact the idea to have "smart" delta indexes once12:14
juliankSo update would download Packages files, then calculate upgrade download size12:14
juliankand if size(Delta indexes) < some% of upgrade size, it would fetch Delta indexes and look for deltas12:15
juliankoh, we need 4 deltas per update12:15
juliankthis does not scale well12:15
julianksizes of Deltas.xz ~ Packages.xz12:16
juliankWondering if a bloom filter might be worth it12:17
juliankSo update gets a bloom filter file for which updates have deltas12:18
juliank"might have deltas"12:18
juliankto reduce the number of fetches during install/upgrade12:18
juliankwhen doing a per-source-package signature12:18
juliankA SHA256 has 16 16-bit values we can use as hashes12:22
juliankwe then just need 2**16 bytes for the filter12:22
juliankxenial-updates needs 12.3 KB bloom filter for a 0.75% false positive rate12:56
julianksounds OK12:56
juliank1.5% increase in update time to reduce failing delta lookups by about 40% or so12:58
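
    (A minimal Python sketch of the idea above -- not juliank's implementation; the filter size, bit-per-slot layout and key encoding are assumptions made purely for illustration:)

        import hashlib

        FILTER_BITS = 1 << 16      # assumed size; the real size/false-positive trade-off differs

        def _hash_values(key):
            # Split one SHA256 digest into 16 independent 16-bit values, as described above.
            digest = hashlib.sha256(key).digest()
            return [int.from_bytes(digest[i:i + 2], "big") for i in range(0, 32, 2)]

        def bloom_add(bits, key):
            for h in _hash_values(key):
                bits[h >> 3] |= 1 << (h & 7)

        def bloom_might_have_delta(bits, key):
            return all(bits[h >> 3] & (1 << (h & 7)) for h in _hash_values(key))

        bits = bytearray(FILTER_BITS // 8)
        bloom_add(bits, b"hello 2.10-1 2.10-2")                       # hypothetical key layout
        print(bloom_might_have_delta(bits, b"hello 2.10-1 2.10-2"))   # True
        print(bloom_might_have_delta(bits, b"hello 2.10-2 2.10-3"))   # almost certainly False
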
sladenjuliank: this sounds exciting.  Is there more information somewhere?14:14
julianksladen: Yes, at https://debconf18.debconf.org/talks/66-delta-upgrades-revisited/, https://people.debian.org/~jak/dc18-deltas.pdf for example14:15
julianksladen: unless you mean the bloom filter specifically14:15
juliankthat just popped in today14:15
sladenjuliank: one possible optimisation of this is to provide N diff scripts, and 1x literal data per file.  This would allow one deltadeb to match a scalable number of the last $N package versions14:34
sladenjuliank: so for the moment you could just ship a 1:1 upgrade mapping, but it leaves open the possibility for future optimisation14:35
julianksladen: complicated14:35
juliankI mean, script is the wrong word14:35
juliankI'm not sure where you'd integrate this14:36
juliankon the ddelta level?14:36
juliankAdd a CRC32 of input data to each command, and then provide alternating blocks?14:37
* sladen re-finds https://www.uknof.org.uk/uknof6/Sladen-Delta-upgrades.pdf from a decade ago. 14:37
sladenjuliank: https://people.debian.org/~jak/dc18-deltas.pdf page 8,  each file has 1x "diff data" and 1x "extra data"14:38
juliankYes14:38
sladenjuliank: the suggestion would be that each file has Nx "diff data" and 1x "extra data".14:38
juliankAnd there will be a CRC for diff data eventually (for the part we are adding the data to)14:38
juliankSo you could reasonably have an "or flag", too14:39
sladenjuliank: where the "bitmask" for the "extra data" can be expanded to cover the last N versions of a package14:39
juliankBut I think it gets too slow14:39
* sladen looks confused14:39
juliankThe problem is simple: If we have multiple diff data, we need to figure out which one to use14:39
juliankfor that we need to read the block / try to apply it14:40
juliankif we fail, we'd have to seek back and start again14:40
juliankunless you make the choice first14:40
sladenper file, there is {1x input file (optionally validated by some hash), 1x "extra data", and 1x "diff data"}14:40
sladenthis gives a one-way mapping:   X+Y [transform via "diff data"] => Z14:41
juliankyou're still missing the point14:42
* sladen listens14:42
juliankIf I have n diffs for a given file14:42
juliankhow do I figure out which one to apply14:42
sladenswitch the question around14:42
sladenif I have a starting input file with hash 0xdeadbeef, which diff do I *choose* to reach the end point14:43
juliankyou don't know anything about the file until you have started applying the delta14:43
sladen(one assumes that you're already validating the input before processing it ;-)14:43
sladenjuliank: if you "don't know anything about the file", how did you find it on disk?14:44
juliankI did not14:44
juliankThat's a different layer14:45
juliankThe real delta layer can only apply deltas given an old file14:45
sladenyes14:45
juliankOn top of that there's the tree delta layer, which figures out the old name and the new one and stuff like that14:46
juliankit's like literally a tarfile and a header with "old name" in it14:46
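
    (A toy illustration of that description -- purely hypothetical; the pax header key below is invented and this is not the actual deltadeb layout:)

        import io, tarfile

        # One per-file delta stored as a tar member, with the old file's name carried
        # in a header, roughly as described above.
        buf = io.BytesIO()
        with tarfile.open(fileobj=buf, mode="w", format=tarfile.PAX_FORMAT) as tar:
            delta = b"...ddelta stream for usr/bin/foo..."
            member = tarfile.TarInfo(name="usr/bin/foo")
            member.size = len(delta)
            member.pax_headers = {"DDELTA.old-name": "usr/bin/foo"}   # invented field
            tar.addfile(member, io.BytesIO(delta))
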
juliankBut of course14:46
juliankwe can just say "delta id 5 inside here is the delta for that combination"14:47
juliankthat said, you then increase the problem of finding the delta on the server14:48
* sladen raises an eyebrow14:48
juliankyour file name needs to identify to which upgrade the delta belongs14:49
juliankthen you end up with pkg_id1_id2_id3_ddelta.deltadeb or something14:49
juliankI'm not sure on the exact details yet14:50
juliankthe latest draft stage is at https://lists.debian.org/debian-dpkg/2018/09/msg00019.html14:50
juliankand even that is somewhat out of date14:50
sladenjuliank: perhaps think of it a different way.  Let's say we have a package that is regularly updated (once a day), say the hypothetical "message-of-the-day.deb"14:50
juliankas pkg_$old_$new_$algo.deltadeb might become $pkg_$hash($old|$new)_$algo.deltadeb14:51
juliank64 bytes less per delta14:51
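
    (A sketch of that shortened naming scheme in Python -- hypothetical, since the exact details are undecided as juliank says:)

        import hashlib

        def delta_filename(pkg, old_id, new_id, algo="ddelta"):
            # One hash instead of carrying both content IDs in the name.
            combined = hashlib.sha256((old_id + "|" + new_id).encode()).hexdigest()
            return "{}_{}_{}.deltadeb".format(pkg, combined, algo)

        print(delta_filename("hello", "0f3a...", "9c1b..."))   # placeholder IDs
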
sladenjuliank: only '/etc/motd' is changed in this package.  the other files, such as /usr/share/message-of-the-day/README remain unchanged14:51
sladenjuliank: for the last 5 versions, N .deltadebs are currently required, which are 98% the same.14:53
* sladen reads https://lists.debian.org/debian-dpkg/2018/09/msg00019.html14:54
juliankI fully understand the argument for that approach14:54
juliankI don't think it's easily doable given the requirements14:54
juliankThe requirements for deltas are (1) min. seeks on old file (2) no seeks on delta (3) no seeks on new file14:55
juliankIf you want each block to store $n$ diff data for $n$ old versions (or store $n$ control blocks for $n$ old versions)14:56
juliankyou can easily do that14:56
juliankjust assign indexes to them and map indexes to old packages in the control.tar14:57
juliankbut that's not elegant14:57
juliankTo get rid of that, you have to make the decision which diffs to pick in the delta itself14:57
juliankand that gets nasty.14:57
sladencan this avoid duplicating the common parts of new literal data ("extra data")?14:57
julianksladen: Not sure, but duplication does not matter much14:58
juliankit's a very tiny overhead after zstd/xz compression14:58
juliankyou could in fact just include $n$ deltas I'd think for $n$ old versions14:58
sladenahhhh, okay it relies on the enormous window size of zstd/xz/etc14:59
juliankNow, what will happen is that we get CRC32 soon for the "old" data we are adding diffs to, to ensure we are patching with the correct file15:00
julianks/ensure/increase confidence/15:01
sil2100!dmb-ping15:01
ubottucyphermox, jbicha, micahg, rbasak, sil2100, slashd, tsimonq2: DMB ping.15:01
juliankSo my idea was that we could give each of these (diff, extra, seek) control triplets an "or" flag15:01
cyphermoxo/15:01
juliankif the CRC does not match, we go to the next control entry15:01
juliankthat works, but it's somewhat inefficient15:02
juliankbecause the diff block might be MBs large15:02
juliankit also does not work after all, because we'd have to undo the write to the new file15:02
juliank(i.e. seek back and truncate)15:03
juliankbut: Both the patch and the new file are pipes15:03
juliank(dpkg reads the patched file from the pipe, and calculates hashes before storing it on the system)15:04
juliankYou can solve that by just picking a fixed maximum window size you can keep in memory15:05
juliankbut um, for large files that slows things down a bit15:05
juliankthat said, for really big files, using windows during creation is a lot more effective15:06
juliankI think  bsdiff generation is O(n^2) atm15:07
juliankif you use blocks of $B$ bytes, it becomes (n/B*B^2)15:07
juliankso it only grows linearly rather than quadratically15:07
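
    (A quick worked instance of that scaling claim, with purely illustrative numbers:)

        n, B = 100 * 2**20, 8 * 2**20   # 100 MiB input, hypothetical 8 MiB window
        print(n * n)                    # ~1.1e16 without windowing
        print((n // B) * B * B)         # ~8.4e14 with windowing: (n/B) blocks of B^2 work each
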
sladenjuliank: in the msg0019 spec,  "if file.mtime >= installed_version.install_time or file.mtime > file.ctime or file.size != expected_size"  <-- probably a lot better to use a checksum to match before ingesting15:07
sladenjuliank: and once a checksum is used, that can become the primary matching key15:08
julianksladen: that's the paranoid mode15:08
juliankour belief is that checksums are too slow to be practical here, and useless in most situations15:08
juliankwhere us is me and some other people I talked to about this15:10
juliank:)15:10
sladenchecksums are really slow if done on the server (rsync), but checksums on clients are $fast (cost of checksum is > than cost of downloading precious bytes)15:10
sladenchecksums are really slow if done on the server (rsync), but checksums on clients are $fast (cost of checksum is < than cost of downloading precious bytes)15:10
juliankWhat you already have checked is that the files, when unpacked, have the right content15:10
juliankbecause of the IDs identifying the content of the package, and you looking up deltas based on ids15:11
juliankI'm unsure about the implications of actually hashing the files15:11
juliankwe will be reading them with a 60% probability15:12
juliankbecause we'll be applying a delta15:12
juliankand dpkg already hashes the files it is writing to the disk15:12
juliankso the overhead should be less than 50%15:12
juliankbut let's measure15:13
juliankI spent 47 seconds checking my 3326 packages15:14
juliankon a fast Samsung SSD from this year15:14
juliankthis means that for an upgrade of 300 packages, I'd spend about 5 seconds15:15
juliank*35 seconds with caches15:15
sladenthat's still presumably 99% disk I/O15:16
juliankyes15:20
juliankthe non-paranoid mtime check only takes 1s15:20
sladenthat's not reading/checking any files, only filesystem structure15:21
juliankthat's the point15:21
juliankwe can be reasonably sure that the file has not been modified15:21
juliankbecause it requires extreme stupidity or deviousness to change your mtime back to before install time15:22
juliankor even build time15:22
juliankWhen does that happen?15:24
juliankWell, I'd say only if your clock is behind a few days/months, you have installed the package with the correct time, and you modify the file while keeping the bit length the same15:25
sladenchecking mtime sounds like an optimisation saying that an upgrade is /probably/ possible -> then fetch the set of hash(es) to see whether it is really truly possible15:25
juliankbut we can protect against clock resets too I think15:25
juliankI think15:26
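
    (A minimal sketch of the two checks being compared here, based on the condition quoted from the draft spec above; function names and the chunked hashing are illustrative only:)

        import hashlib, os

        def looks_unmodified(path, expected_size, install_time):
            # The cheap metadata-only check (~1 s for a whole system, per above).
            st = os.stat(path)
            return (st.st_mtime < install_time
                    and st.st_mtime <= st.st_ctime
                    and st.st_size == expected_size)

        def paranoid_unmodified(path, expected_sha256):
            # "Paranoid mode": hash the installed file (the ~47 s / 3326 packages figure).
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 16), b""):
                    h.update(chunk)
            return h.hexdigest() == expected_sha256
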
sladenjuliank: is there a directory somewhere, with an extra set of .deltadebs to look at?15:26
sladens/extra/example/15:26
juliankno15:27
juliankhttps://github.com/julian-klode/ddelta/tree/demo has a delta generation script15:27
juliank./build-delta.sh $old $new $delta15:27
juliankbuilds a delta from $old to $new and stores it as $delta15:27
juliank* delta deb15:28
juliankit's very primitive15:28
juliankone thing it does is generate a normal .deb with debian-binary instead of a file with debian-delta15:30
juliankbut minor details15:30
juliankease of prototyping15:30
jibelinfinity, hi, could you have a  look at bug 1794137 ?15:30
ubottubug 1794137 in ubiquity (Ubuntu) "ubiquity FTBFS on Cosmic" [Undecided,New] https://launchpad.net/bugs/179413715:30
julianksladen: I think windowing would definitely be worth it, the memory requirements go down a lot, effectively become constant. as https://github.com/julian-klode/ddelta/commit/05aecc28c2f049cad83d9c7ef2e2439d0ecae295 mentions15:31
juliankbut I think fixed windows of 8 MB would not work well in practice15:31
juliankyou need some heuristics15:32
julianksladen: If you want to compare with xdelta3 as the algorithm, just use ddelta_generate="xdelta3 -s" instead15:45
sladenjuliank: where's the customised version of bsdiff()  (specifically the search() function)15:49
juliankIt's in that repo15:49
juliankhttps://github.com/julian-klode/ddelta/blob/master/ddelta_generate.c15:49
juliankit's mostly just Google's bsdiff fork plus some cleanup15:51
juliankand without large file support15:51
juliank(we don't really need large file support since we are creating deltas of individual files, and it reduces memory usage by about 50%)15:53
juliankand well, memory usage with LFS is 9m + n15:55
juliankso assuming you actually need large files >= 4 GB15:55
juliankyou end up with at least 36 GB of memory requirements15:55
juliankunless you use the windowed branch then you only need 50 MB (or well, 6w for window size w)15:56
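
    (Back-of-envelope for those figures, assuming the old-file term dominates the 9m + n estimate and an 8 MiB window:)

        n = 4 * 2**30                  # a 4 GiB old file
        print(9 * n // 2**30, "GiB")   # 36 GiB for non-windowed generation
        w = 8 * 2**20                  # assumed window size
        print(6 * w // 2**20, "MiB")   # 48 MiB, i.e. the "~50 MB" figure above
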
sladenwell, some (variable) window should probably be used, and that should be stated in the debdelta15:57
sladenso that eg. an old ARM/whatever machine with 16MB of RAM can choose to only look at deltadebs that declare that they only use a 1MB window15:58
sladenand if not, download everything15:58
juliankmemory use when applying is already constant15:58
juliankthe memory requirement here is when generating15:59
julianknot counting stdio, memory use is 64 KB of stack memory for the data blocks16:00
juliank+ a few bytes here and there, and three levels of recursion or so16:00
juliankthere are no heap allocations in the program (except for whatever libc does)16:01
juliankI suggest reading the ddelta_apply.c (https://github.com/julian-klode/ddelta/blob/master/ddelta_apply.c), it's quite easy to follow16:01
juliankthere's some wicked vectorization magic in apply_diff(), but apart from that, it's quite easy16:03
juliankthe be64toh is a bit ugly too16:04
sladenjuliank: this stuff is drastically simpler than what was being looked at ~10 years ago, which failed because of a desire to bit-for-bit recreate the target .deb; and the checksums to prove that were only available on the recompressed .deb, rather than purely the contents17:37
sladenjuliank: there did not seem to be an appetite for writing directly to the filesystem, partly because of the inability to roll back to the previous state on failure17:38
sladenjuliank: is there an appetite for directly writing, versus re-creating the .deb and getting dpkg -i to apply that?17:40
julianksladen: the rollback ability is exactly the same as for out of space errors and roughly the same as for failing preinsts17:45
juliankAfter all, we apply the delta not to the file itself, but write the .dpkg-new file as before17:46
juliankso if some delta fails, dpkg will revert precisely as it does with other unpack failures17:46
juliankOr rather try to revert17:46
juliankthat is, it deletes the .dpkg-new files and calls some maintainer scripts which hopefully work17:47
juliankWith some modifications to dpkg and apt, it should be possible to make apt download the full deb if installing the delta fails17:48
juliankFor that, dpkg needs to communicate a failing unpack due to delta17:48
juliankvia status fd17:48
juliankapt needs to interpret that and acquire the full thing17:49
julianksladen: Also, the algorithm does support regenerating the .deb file bit-by-bit17:49
juliankwell, except for compressor changes, if any17:50
juliankBut the problem with regenerating the full deb is that you end up with a lot of space usage17:51
juliankthere is the intermediate step where you regenerate the tar while you're piping it to dpkg17:51
persiajuliank: most compression tools can be used reproducibly with some set of flags.17:51
juliankpersia: not really17:52
juliankWell, for a given point in time, they can17:52
juliankBut let's compress our deb now, and then try to regenerate it 5 years later17:52
persiajuliank: I meant gz, xz, bz, etc., for which time is less important.  For tar, etc. things like "--mtime @$$SOURCE_DATE_EPOCH" work.  5 years is trickier.  Anyway, not important.17:55
juliankpersia: I think pristine-gz ships like 2 copies of different gzip versions now to enable that use case :)17:56
persia:(17:56
sladentheoretically one needs to know every choice made by the encoder, and the problem is $impossible.  In practice, it can be achieved in 99% of cases, but 90% of those are gzip -9, and there is code around that eg. has snapshots of the different versions of bzip217:57
juliankyeah, like pristine-gz17:57
juliank:)17:57
juliankNow you can do bitwise regeneration of most debs17:58
juliankbut as they use xz it's quite slow17:58
juliank...17:58
juliankIf you don't care about bitwise, then you can go nuts17:58
julianke.g. regenerate full debs without compression17:58
juliankor with zstd -1 if we do zstd17:58
juliankwhich would not really be noticeable17:58
sladenwhat would be better (and is the same case as 10 years ago) would be to store the hash of the uncompressed tar inside the .deb; then the compression algorithm becomes irrelevant (and can be changed as required)17:59
juliankyou can do that, yeah17:59
sladenjuliank: somewhere, there is the 'apt-sync' / 'apt-zsync' package code, which provides an APT method using zsync.  It should be possible to copy and adapt that and have a more rounded and working demonstration system18:00
juliankalso, tardelta (or deltatar; aka a tarball of my deltas) is very easy to regenerate.18:00
julianksladen: Such stuff does not work18:00
persiaProbably want to store multiple hashes (maybe the same set as in dsc files) to make it harder to game.18:00
sladenjuliank: why not?18:00
julianksladen: You cannot use rsync/zsync algorithms for binaries18:01
juliankalso, debs are not built rsyncable18:01
juliankNow the first argument is a bit complicated, but should be easy to understand18:01
juliankIf you add a few bytes at the beginning of your file, all offsets change by that amount18:02
juliankthe entire file becomes unsyncable18:02
sladenjuliank: in this case, it is the apt-sync *APT method* that might be usable to get a more complete demonstration of this proposed *debdelta* system working18:02
juliankI have a working dpkg that can install deltas directly, and I played around with that18:02
sladenjuliank: thus making the whole proposed *debdelta* system more demonstrable18:03
juliankbut yes, you can also build a method that reconstructs debs18:03
juliankbut we also need a repository layout anyway18:03
julianka PoC would not have any signature checks I think18:03
juliankalthough18:03
juliankthe method could do that itself18:04
sladenjuliank: [regarding zsync]  the gzip --rsyncable is about repeatedly restarting the gzip stream, limiting the window size, in order to better re-use existing literal data.  In the proposed *debdelta*, that literal data is provided in the "extra data" block anyway.  This already presumes that server-side disk space is no longer an issue18:07
sladenif server-side disk-space is indeed no longer an issue, then this makes the zsync side of stuff a lot easier (it becomes zsync minus the z); in addition to the compressed .deb, store an uncompressed .deb on the server, and allow HTTP/1.1 Range: Partial-Content access to this for missing pieces not available in the "extra data" (which could now be zero-length)18:09
julianksladen: rsyncable was about rsync/zsync18:10
juliankobviously18:10
juliank2nd, the name is not debdelta18:10
juliank3rd, server-side disk space is an issue18:10
sladendeltadeb?18:10
julianksladen: yeah, for now, but it's super confusing18:11
* sladen happy to use any name preferred18:11
juliankwould have called them ddebs, but we already have those :(18:11
juliankpdeb was in the round as well (patch deb)18:12
juliankdebdiff and debpatch are both used as well18:12
julianknaming software is *hard*18:12
ogracall it "frank"18:13
juliankI could call it voyager18:13
juliankafter all, related to delta18:13
juliankwell, the delta quadrant18:14
ogra:D18:14
sladenjuliank: --rsyncable does two (unrelated) things.  (1) resets the encoding/compression state on a particular input (zeros in the input stream);  (2) more reset points in the stream where it can be entered without knowing state [back-reference window state + current Huffman tree in use].18:32
juliankI don't care18:33
sladenone *can* jump into the stream at any point if the current Huffman table is known, and the contents of the backreference window is available18:33
juliankHow's that relevant to the topic of deltas?18:34
sladenchoosing whether, and how much, literal data to duplicate, vs. trying to fetch the literal data from the original .deb18:35
sladen(server side disk space vs. bandwidth tradeoffs)18:35
joelkraehemannhi all18:48
xnox@help20:42
udevbot_(help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin.20:42
xnox@help pilot20:42
udevbot_(pilot (in|out)) -- Set yourself an in or out of patch pilot.20:42
xnox@pilot in20:42
=== udevbot_ changed the topic of #ubuntu-devel to: Archive: Feature Freeze (Cosmic Cuttlefish) | 18.04 released | Devel of Ubuntu (not support) | Build failures: http://qa.ubuntuwire.com/ftbfs/ | #ubuntu for support and discussion of Trusty-Bionic | If you can't send messages here, authenticate to nickserv first | Patch Pilots: cyphermox, xnox
xnoxcyphermox, i think that keeping the patch-pilot name should be fine to be honest =)20:42
xnox@pilot out20:42
=== udevbot_ changed the topic of #ubuntu-devel to: Archive: Feature Freeze (Cosmic Cuttlefish) | 18.04 released | Devel of Ubuntu (not support) | Build failures: http://qa.ubuntuwire.com/ftbfs/ | #ubuntu for support and discussion of Trusty-Bionic | If you can't send messages here, authenticate to nickserv first | Patch Pilots: cyphermox
cyphermoxxnox: *shrugs* either way. I was asking, since I figured only dholbach (or very few people) knew where the code was at all20:43
cyphermox@pilot out20:43
=== udevbot_ changed the topic of #ubuntu-devel to: Archive: Feature Freeze (Cosmic Cuttlefish) | 18.04 released | Devel of Ubuntu (not support) | Build failures: http://qa.ubuntuwire.com/ftbfs/ | #ubuntu for support and discussion of Trusty-Bionic | If you can't send messages here, authenticate to nickserv first | Patch Pilots:
xnoxcyphermox, right, but i only had the calendar code, not the irc-bot codes. and i think the feedback was that calendar code is not wanted.20:48
cyphermoxI don't know21:31
cyphermoxsome people like being scheduled, that way they know to do it at a specific time21:31
cyphermoxbut I guess that works better for patch piloting than other things, maybe?21:31
cyphermoxbrb21:31
naccahasenack: congrats!21:32
mwhudsonahasenack: congrats!21:33
joelkraehemannis this channel anyhow related to snapcraft.io?22:32
wxljoelkraehemann: #snappy22:33
joelkraehemannwxl: thank you22:35
=== nyghtmeyr is now known as wxl
stgraberc23:14
stgraberoops23:14
mwhudsonhmm is there an easy way to update to a new upstream with a git ubuntu clone-ed repo?23:34
mwhudsonuupdate is almost right, apart from the way it creates a new repo23:34
mwhudsongbp import-orig similarly almost but not quite what you want23:34
naccmwhudson: yeah, there's not a great way23:40
nacci think we have a bug for it23:40
naccmwhudson: here's what i've done in the past, it's not ideal23:40
naccmwhudson: clear out all non-debian/ directories and files from the git-ubuntu repo, move all the uupdate'd repo's files over23:41
naccmwhudson: `git add .` (should just be non-debian changes)23:41
naccmwhudson: insert a changelog entry (you can use the topmost from the uupdate'd one as a template for the version)23:42
naccmwhudson: i *think* that mostly works, that's how i would do the php updates by hand23:42
mwhudsonah yeah23:42
mwhudsoni am reading how gbp import-orig is implemented now :)23:42
naccand i imagine is actually similar to what gbp does, with probably more smarts.23:42
naccit's basically 'stash debian, update everything, unstash debian'23:42
nacc(in my mind)23:42
mwhudsonit creates a new tree out of the tarball contents, commits that, then mashes the debian dir from the packaging branch into it23:43
naccyeah, that make sense23:43
mwhudson(with vaguely appropriate hashes as commit parents)23:43
naccwe could do something similar, tbh, we have git-tree representations of any directory (or can) and can do temporary directory things23:43
mwhudsoni have an implementation of gbp's debian directory mashing in shell23:44
mwhudsonbut not the other bit23:44
mwhudsonhttps://paste.ubuntu.com/p/qfd6wVksDy/23:44
naccthere's LP: #164994023:44
ubottuLaunchpad bug 1649940 in usd-importer "can't prepare new upstream releases using gbp" [Medium,Confirmed] https://launchpad.net/bugs/164994023:45
naccand LP: #170621923:45
ubottuLaunchpad bug 1706219 in usd-importer "`git ubuntu upstream-update` wrapper around uupdate/uscan" [Wishlist,Confirmed] https://launchpad.net/bugs/170621923:45
nacci had some scaffolding in a branch to do the latter23:45
nacci think it's a 'future' item, because we are still stabilizing the importer ABI23:45
mwhudsonit's slightly frustrating that gbp has all the bits required for this23:47
mwhudsonbut can't actually do it23:47
naccmwhudson: yeah, gbp has some very specific branch requirements, iirc. We were trying to avoid coding too much of that policy yet23:48
rbasakYou can hack it by messing around with GIT_WORK_TREE23:52
rbasakUnpack the tarball somewhere23:52
naccah good point23:52
naccyeah, that's what my scaffolding started to do23:52
naccbasically, did uupdate in a specific place23:52
naccand took that working tree23:52
rbasakGIT_WORK_TREE=$there git add -A23:52
naccyep23:52
naccrbasak: could you spit that into one of the two bugs?23:52
rbasakThen some messing around with "git reset debian" (I'd have to think about it exactly)23:53
mwhudsonheh ok i have half a shell script for this now too23:53
naccrbasak: right, that was about where i got23:54
naccyeah, you want to commit changes to non-debian as part of the uupdate23:54
rbasakDone23:54
naccbut it would be a nice feature :)23:54
