diffcore-delta.c: update the comment on the algorithm.

The comment at the top of the file described the old algorithm,
which was neutral to text/binary differences (it hashed a sliding
window of N-byte sequences and counted overlaps), but we long ago
switched to a new heuristic that is more suitable for line-oriented
(read: text) files and is much faster.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
Junio C Hamano
2007-06-28 23:11:40 -07:00
parent 706098af6b
commit af3abef94a


@@ -5,23 +5,20 @@
 /*
  * Idea here is very simple.
  *
- * We have total of (sz-N+1) N-byte overlapping sequences in buf whose
- * size is sz. If the same N-byte sequence appears in both source and
- * destination, we say the byte that starts that sequence is shared
- * between them (i.e. copied from source to destination).
+ * Almost all data we are interested in are text, but sometimes we have
+ * to deal with binary data. So we cut them into chunks delimited by
+ * LF byte, or 64-byte sequence, whichever comes first, and hash them.
  *
- * For each possible N-byte sequence, if the source buffer has more
- * instances of it than the destination buffer, that means the
- * difference are the number of bytes not copied from source to
- * destination. If the counts are the same, everything was copied
- * from source to destination. If the destination has more,
- * everything was copied, and destination added more.
+ * For those chunks, if the source buffer has more instances of it
+ * than the destination buffer, that means the difference are the
+ * number of bytes not copied from source to destination. If the
+ * counts are the same, everything was copied from source to
+ * destination. If the destination has more, everything was copied,
+ * and destination added more.
  *
  * We are doing an approximation so we do not really have to waste
  * memory by actually storing the sequence. We just hash them into
  * somewhere around 2^16 hashbuckets and count the occurrences.
- *
- * The length of the sequence is arbitrarily set to 8 for now.
  */
 /* Wild guess at the initial hash size */