A previous commit introduced a new parse_one_advertised_remote()
function that takes a `const char *` argument. This function is called
from filter_promisor_remote() and parses all the fields for one remote.
This means that in filter_promisor_remote() we no longer need to split
the remote information that will be passed to
parse_one_advertised_remote() into an array of relatively heavy and
complex `struct strbuf`.
To use something lighter, let's replace strbuf_split_str() with
string_list_split() in filter_promisor_remote() when splitting the remote
information that is then passed to parse_one_advertised_remote().
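For illustration, the lighter parsing loop could look roughly like this
(a sketch only; it assumes the advertised remotes are separated by ';'
and uses illustrative variable names):

    struct string_list remotes = STRING_LIST_INIT_DUP;
    size_t i;

    /* one string_list item per advertised remote */
    string_list_split(&remotes, advertised, ';', -1);
    for (i = 0; i < remotes.nr; i++)
            parse_one_advertised_remote(remotes.items[i].string);
    string_list_clear(&remotes, 0);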
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In a follow-up commit we are going to parse more fields, like a filter
and a token, coming from the server when it advertises promisor remotes
using the "promisor-remote" capability.
To prepare for this, let's refactor the code that parses the advertised
fields coming from the server into a new parse_one_advertised_remote()
function that will populate a `struct promisor_info` with the content
of the fields it parsed.
While at it, let's also pass this `struct promisor_info` to the
should_accept_remote() function, instead of passing it the parsed name
and url.
These changes will make it simpler to both parse more fields and access
the content of these parsed fields in follow-up commits.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A previous commit started to define `promisor_field_filter` and
`promisor_field_token`, and used them instead of the
"partialCloneFilter" and "token" string literals.
Let's do the same for "name" and "url" to avoid repeating them
several times and for consistency with the other fields.
For skipping "name=" or "url=" in advertisements, let's introduce
a skip_field_name_prefix() helper function to keep parsing clean
and easy to understand.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
For now the "promisor-remote" protocol capability can only pass "name"
and "url" information from a server to a client in the form
"name=<remote_name>,url=<remote_url>".
To allow clients to make more informed decisions about which promisor
remotes they accept, let's make it possible to pass more information
by introducing a new "promisor.sendFields" configuration variable.
On the server side, information about a remote `foo` is stored in
configuration variables named `remote.foo.<variable-name>`. To make
it clearer and simpler, we use `field` and `field name` like this:
* `field name` refers to the <variable-name> part of such a
configuration variable, and
* `field` refers to both the `field name` and the value of such a
configuration variable.
The "promisor.sendFields" configuration variable should contain a
comma or space separated list of field names that will be looked up
in the configuration of the remote on the server to find the values
that will be passed to the client.
Only a predefined set of field names is allowed; for now this set
contains just "partialCloneFilter" and "token". The
"partialCloneFilter" field name specifies the filter definition used
by the promisor remote, and the "token" field name can provide an
authentication credential for accessing it.
For example, if "promisor.sendFields" is set to "partialCloneFilter",
and the server has the "remote.foo.partialCloneFilter" config
variable set to a value, then that value will be passed in the
"partialCloneFilter" field in the form "partialCloneFilter=<value>"
after the "name" and "url" fields.
A following commit will allow the client to use this information to
decide whether or not it accepts the remote. For now the client doesn't do
anything with the additional information it receives.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In a following commit, we will use the new 'promisor-remote' protocol
capability introduced by d460267613 (Add 'promisor-remote' capability
to protocol v2, 2025-02-18) to pass and process more information
about promisor remotes than just their name and url.
For that purpose, we will need to store information about other
fields, especially information that might or might not be available
for different promisor remotes. Unfortunately our current approach of
storing promisor remote information in one 'struct strvec' per field,
like "name" or "url", does not scale easily in that case: we would need
one 'struct strvec' for each new field, and then we would have to pass
all these 'struct strvec' around.
Let's refactor this and introduce a new 'struct promisor_info'.
It will only store promisor remote information in its members. For now
it has only a 'name' member for the promisor remote name and a 'url'
member for its URL. We will use a 'struct string_list' to store the
instances of 'struct promisor_info'. For each 'item' in the
string_list, 'item->string' will point to the promisor remote name and
'item->util' will point to the corresponding 'struct promisor_info'
instance.
Explicit members are used within 'struct promisor_info' for type
safety and clarity regarding the specific information being handled,
rather than a generic key-value store. We want to specify and document
each field and its content, so adding new members to the struct as
more fields are supported is fine.
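For illustration, the arrangement could look roughly like this (member
and variable names follow the description above; the remote name and
URL are placeholders):

    struct promisor_info {
            const char *name;
            const char *url;
    };

    struct string_list infos = STRING_LIST_INIT_NODUP;
    struct promisor_info *p = xcalloc(1, sizeof(*p));

    p->name = xstrdup("foo");
    p->url = xstrdup("https://example.com/repo.git");
    /* item->string is the remote name, item->util the whole struct */
    string_list_append(&infos, p->name)->util = p;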
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In git.git, commit 5309c1e9fb (Makefile: set default goals in
makefiles, 2025-02-15) touched two Makefiles in the git-gui/ directory.
Import these changes, so that the trees can converge again with the
next merge of this repository into git.git.
Reported-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
* js/ask-yesno:
git-gui--askyesno (mingw): use Git for Windows' icon, if available
git-gui--askyesno: allow overriding the window title
git gui: set GIT_ASKPASS=git-gui--askpass if not set yet
git-gui: provide question helper for retry fallback on Windows
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
When a client performs a fetch or clone they can optionally send "have"
lines to tell the server which objects they already have available
locally. These object IDs are stored by the server in an object array so
that it can remember any objects it doesn't have to include in the pack
sent to the client.
While there isn't any reason to do so, clients are free to send the same
"have" line repeatedly. git-upload-pack(1) already knows to handle this
well: every commit it has seen via a "have" line gets marked with the
`THEY_HAVE` flag, and if such a commit is seen repeatedly we know to not
process it another time. This also has the effect that we only store the
object ID only once in the `have_obj` array.
There is an edge case though: if the client sends an object ID that does
not refer to a commit, we neither store nor check the `THEY_HAVE` flag.
This means that we repeatedly store the same object ID in our `have_obj`
array, with two consequences:
- In protocol v2 we deduplicate ACKs for commits, but not for any
other objects as we send ACKs for every object ID in the `have_obj`
array.
- The `have_obj` array can grow in size indefinitely with both
protocols.
The potentially-more-serious issue is the second one, as we basically
have a way for an adversary to allocate arbitrarily large buffers now.
Ultimately, this doesn't seem to be all that serious though: on my
machine, the growth of that array is at around 4MB/s, and after roughly
five minutes I was only at 1GB RSS. So this is concerning, but only
mildly so.
Fix this bug by storing the `THEY_HAVE` flag independent of the object
type so that we don't store duplicate object IDs in `have_obj` anymore.
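The gist of the fix, as a rough sketch rather than the literal diff
(`have_obj` stands for the object array mentioned above):

    struct object *o = parse_object(the_repository, oid);

    if (o && !(o->flags & THEY_HAVE)) {
            o->flags |= THEY_HAVE;
            /* store each object ID only once, commit or not */
            add_object_array(o, NULL, &have_obj);
    }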
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Refactor tests to follow modern best practices:
- Merge together tests that set up and verify a single use case.
- Drop empty lines at the beginning and end of test bodies.
- Don't change directories in the main test body.
- Remove an unused `D` variable.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The write_midx_internal() method uses gotos to jump to a cleanup section to
clear memory before returning 'result'. Since these jumps are more common
for error conditions, initialize 'result' to -1 and then only set it to 0
before returning with success. There are a couple of places where we return
with success via a jump.
This has the added benefit that the method now returns -1 on error instead
of an inconsistent 1 or -1.
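The resulting pattern looks roughly like this generic sketch (not the
actual midx-write.c code; write_step() is a stand-in for any step that
can fail):

    static int write_example(void)
    {
            int result = -1;        /* assume failure until proven otherwise */
            char *buf = xstrdup("data");

            if (write_step(buf) < 0)
                    goto cleanup;   /* early exits keep result == -1 */

            result = 0;             /* flip to success only at the end */
    cleanup:
            free(buf);
            return result;
    }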
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Remove the remaining signed comparison warnings in midx-write.c so that
they can be enforced as errors in the future. After the previous change,
the remaining errors are due to iterator variables named 'i'.
The strategy here involves defining the variable within the for loop
syntax to make sure we use the appropriate bitness for the loop
sentinel. This matters in at least one method where the variable was
compared to uint32_t in some loops and size_t in others.
While adjusting these loops, some of them turned out to check the loop
boundary against a uint32_t value _plus one_. These were replaced with
non-strict comparisons, and the value is also checked to not be
UINT32_MAX. Since the value is the number of incremental multi-pack-
indexes, this is not a meaningful restriction. The new die() is about
defensive programming more than about a condition that could
realistically occur.
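Sketched with made-up identifiers, the two patterns look like this:

    for (size_t i = 0; i < ctx->nr; i++)            /* bound and iterator share a type */
            process_pack(ctx, i);

    if (num_layers == UINT32_MAX)
            die(_("too many incremental multi-pack-index layers"));
    for (uint32_t i = 0; i <= num_layers; i++)      /* non-strict, guarded above */
            process_layer(ctx, i);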
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
midx-write.c has the DISABLE_SIGN_COMPARE_WARNINGS macro defined for a
few reasons, but the biggest one is the use of a signed
preferred_pack_idx member inside the write_midx_context struct. The code
currently uses -1 to indicate an unset preferred pack, but pack int ids
are normally handled as uint32_t. There are also a few loops that search
for the preferred pack by name and those iterators will need updates to
uint32_t in the next change.
For now, replace the use of -1 with a 'NO_PREFERRED_PACK' macro and an
equality check. The macro stores the max value of a uint32_t, so we
cannot store a preferred pack that appears last in a list of 2^32 total
packs, but that's expected to be unreasonable already. Furthermore, with
this change we end up extending the range from 2^31 possible packs to
2^32-1.
Some care is needed when initializing the preferred pack index in the
struct and when using that value while searching for a preferred pack;
that search was already incorrect but accidentally worked when the
index was initialized to zero.
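A small sketch of the new convention (apart from the macro, the
identifiers are illustrative):

    #define NO_PREFERRED_PACK UINT32_MAX

    uint32_t preferred_pack_idx = NO_PREFERRED_PACK;

    /* ... search the pack list by name, recording the index if found ... */

    if (preferred_pack_idx != NO_PREFERRED_PACK)
            place_preferred_pack_first(preferred_pack_idx);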
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The incremental mode of writing a multi-pack-index has a few extra
conditions that could lead to failure, but these are currently
short-circuiting with 'return -1' instead of setting the method's
'result' variable and going to the cleanup tag.
Replace these returns with gotos to avoid memory issues when exiting
early due to error conditions.
Unfortunately, these error conditions are difficult to reproduce with
test cases, which is perhaps one reason why the memory loss was not
caught by existing test cases in memory tracking modes.
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This instance of setting the result to 1 before going to cleanup was
accidentally removed in fcb2205b77 (midx: implement support for writing
incremental MIDX chains, 2024-08-06). Build upon a test that already deletes
a packfile to verify that this error propagates to full command failure.
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The fill_packs_from_midx() method was refactored in fcb2205b77 (midx:
implement support for writing incremental MIDX chains, 2024-08-06) to
allow for preferred packfiles and incremental multi-pack-indexes.
However, this led to some conditions that can cause improperly
initialized memory in the context's list of packfiles.
The conditions that check the preferred pack name or the incremental
flag currently gate whether a packfile gets loaded. But the context is
still being populated with pack_info structs based on the packfile array
of the existing multi-pack-index even if prepare_midx_pack() isn't
called.
Add a new test that breaks under --stress when compiled with
SANITIZE=address. The chosen number of 100 packfiles was selected to get
the --stress output to fail about 50% of the time, while 50 packfiles
could not get a failure in most --stress runs.
The test case is marked as EXPENSIVE not only because of the number of
packfiles it creates, but because some CI environments were reporting
errors during the test that I could not reproduce, specifically around
being unable to open the packfiles or their pack-indexes.
When it fails under SANITIZE=address, it provides the following error:
AddressSanitizer:DEADLYSIGNAL
=================================================================
==3263517==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000027
==3263517==The signal is caused by a READ memory access.
==3263517==Hint: address points to the zero page.
#0 0x562d5d82d1fb in close_pack_windows packfile.c:299
#1 0x562d5d82d3ab in close_pack packfile.c:354
#2 0x562d5d7bfdb4 in write_midx_internal midx-write.c:1490
#3 0x562d5d7c7aec in midx_repack midx-write.c:1795
#4 0x562d5d46fff6 in cmd_multi_pack_index builtin/multi-pack-index.c:305
...
This failure stack trace is disconnected from the real fix because the bad
pointers are accessed later when closing the packfiles from the context.
There are a few different aspects to this fix that are worth noting:
1. We return to the previous behavior of fill_packs_from_midx to not
rely on the incremental flag or existence of a preferred pack.
2. The behavior to scan all layers of an incremental midx is kept, so
this is not a full revert of the change.
3. We skip allocating more room in the pack_info array if the pack
fails prepare_midx_pack().
4. The method has always returned 0 for success and 1 for failure, but
the condition checking for an error looked for a negative result, so
that check is now updated.
5. The call to open_pack_index() is removed, but this is needed later
in the case of a preferred pack. That call is moved to immediately
before its result is needed (checking for the object count).
Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
All callers of clear_alloc_state() immediately free what they
cleared, so currently it does not hurt anybody that the
alloc_state is left in an unreusable state, but it is an
error-prone API. Replace it with a new function that not only clears
but also frees the structure and NULLs the pointer that points at it,
and adjust existing callers.
As it is a moral equivalent of FREE_AND_NULL(), except that what it
frees has internal structure that needs to be cleaned up first, allow
the helper to be called twice in a row, i.e. with a pointer to a
pointer variable that has already been NULLed.
While at it, rename allocate_alloc_state() and name the new
function alloc_state_free_and_null(), to follow more closely the
function naming convention specified in the CodingGuidelines
(namely, functions about S are named with S_ prefix and then
verb).
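The shape of the new helper, as a rough sketch (the internal cleanup is
summarized by an illustrative helper; the real members may differ):

    void alloc_state_free_and_null(struct alloc_state **state_p)
    {
            if (!*state_p)
                    return;         /* a second call on a NULLed pointer is a no-op */
            free_internal_blocks(*state_p);         /* illustrative internal cleanup */
            free(*state_p);
            *state_p = NULL;
    }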
Signed-off-by: ノウラ | Flare <nouraellm@gmail.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Expose the expected type of the second parameter of extend_abbrev_len()
instead of casting a void pointer internally. Just a single caller
passes in a void pointer; the rest pass the correct type. Let the
compiler help keep it that way.
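The pattern, with made-up names standing in for extend_abbrev_len() and
its parameter type:

    struct counter { unsigned hits; };

    /* before: the parameter type is hidden behind void * */
    static void bump(const struct object_id *oid, void *cb_data)
    {
            struct counter *c = cb_data;    /* unchecked internal cast */
            c->hits++;
    }

    /* after: the compiler can check every direct caller */
    static void bump(const struct object_id *oid, struct counter *c)
    {
            c->hits++;
    }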
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The `--show-object-format` flag of git-rev-parse is used for
retrieving the object storage format. It is thus used for querying
repository metadata, which fits the purpose of git-repo-info.
Add a new field `objects.format` to the git-repo-info subcommand
containing that information.
Mentored-by: Karthik Nayak <karthik.188@gmail.com>
Mentored-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Lucas Seiki Oshiro <lucasseikioshiro@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Other Git commands that have nul-terminated output (e.g. git-config,
git-status, git-ls-files) have a flag `-z` for using the null character
as the record separator.
Add the `-z` flag to git-repo-info as an alias for `--format=nul`,
making it consistent with the behavior of the other commands.
Mentored-by: Karthik Nayak <karthik.188@gmail.com>
Mentored-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Lucas Seiki Oshiro <lucasseikioshiro@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Before, we were silently skipping all builtins that don't have a matching
.adoc file. This is overly loose and might skip checking documentation
when it shouldn't, for example when there is a typo in the filename.
To ensure no new builtins are added without documentation, add an
allowlist: t0450/adoc-missing. In this file only builtin commands that
do *not* have a corresponding .adoc file shall be listed. If there is a
mismatch, fail the test. This should force future contributions to
either add an .adoc, or add the builtin name to the allowlist file.
Signed-off-by: Toon Claes <toon@iotcl.com>
[jc: squashed Patrick's "missing file fix" in]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The documentation incorrectly referred to the extension without an 's'.
This fixes the typo for clarity.
Signed-off-by: Mikhail Malinouski <m.l.malinouski@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Depth computation can end early if all remaining commits are flagged.
The current code determines if that's the case by checking all queue
items each time it dequeues a flagged commit. This can cause
quadratic complexity.
We could simply count the flagged items in the queue and then update
that number as we add and remove items. That would provide a general
speedup, but leave one case where we have to scan the whole queue: When
we flag a previously seen, but unflagged commit. It could be on the
queue and then we'd have to decrease our count.
We could dedicate an object flag to track queue membership, but that
would leave fewer flags for candidate tags, affecting the results. So use a
hash table, specifically an oidset of commit hashes, to track that.
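Roughly, with `queue` being the existing prio_queue and the counter
name made up, the bookkeeping looks like this:

    struct oidset in_queue = OIDSET_INIT;

    /* when a commit is put on the queue */
    prio_queue_put(&queue, commit);
    oidset_insert(&in_queue, &commit->object.oid);

    /* when flagging a commit we have already seen */
    if (oidset_contains(&in_queue, &commit->object.oid))
            flagged_in_queue++;     /* adjust the count without scanning the queue */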
This avoids quadratic behaviour in all cases and provides a nice
performance boost over the previous commit, 08bb69d70f (describe: use
prio_queue_replace(), 2025-08-03):
Benchmark 1: ./git_08bb69d70f describe $(git rev-list v2.41.0..v2.47.0)
Time (mean ± σ): 855.3 ms ± 1.3 ms [User: 790.8 ms, System: 49.9 ms]
Range (min … max): 853.7 ms … 857.8 ms 10 runs
Benchmark 2: ./git describe $(git rev-list v2.41.0..v2.47.0)
Time (mean ± σ): 610.8 ms ± 1.7 ms [User: 546.9 ms, System: 49.3 ms]
Range (min … max): 608.9 ms … 613.3 ms 10 runs
Summary
./git describe $(git rev-list v2.41.0..v2.47.0) ran
1.40 ± 0.00 times faster than ./git_08bb69d70f describe $(git rev-list v2.41.0..v2.47.0)
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a test script, `t/t1462-refs-exists.sh`, for the `git refs exists`
command.
This script acts as a simple driver, leveraging the shared test library
created in the preceding commit. It works by overriding the
`$git_show_ref_exists` variable to "git refs exists" and then sourcing the
shared library (`t/show-ref-exists-tests.sh`).
This approach ensures that `git refs exists` is tested against the
entire comprehensive test suite of `git show-ref --exists`, verifying
that it acts as a compatible drop-in replacement.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: shejialuo <shejialuo@gmail.com>
Signed-off-by: Meet Soni <meetsoni3017@gmail.com>
Acked-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In preparation for adding tests for the `git refs exists` command,
refactor the existing t1422 test suite to make its logic shareable.
Move the core test logic from `t1422-show-ref-exists.sh` to
`show-ref-exists-tests.sh` file. Inside this script, replace hardcoded
calls to "git show-ref --exists" with the `$git_show_ref_exists`
variable.
The original `t1422-show-ref-exists.sh` script now becomes a simple
"driver". It is responsible for setting the default value of the
variable and then sourcing the test library.
This structure follows an established pattern for sharing tests and
prepares the test suite for the `refs exists` tests to be added in a
subsequent commit.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: shejialuo <shejialuo@gmail.com>
Signed-off-by: Meet Soni <meetsoni3017@gmail.com>
Acked-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The test file for git-show-ref(1), `t1403-show-ref.sh`, contains a group
of tests for the '--exists' flag. To improve organization and to prepare
for refactoring these tests to be shareable, move the '--exists' tests
and their corresponding setup logic into a self-contained test suite,
`t1422-show-ref-exists.sh`.
This is a pure code-movement refactoring with no change in test coverage
or behavior.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: shejialuo <shejialuo@gmail.com>
Signed-off-by: Meet Soni <meetsoni3017@gmail.com>
Acked-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As part of the ongoing effort to consolidate reference handling,
introduce a new `exists` subcommand. This command provides the same
functionality and exit-code behavior as `git show-ref --exists`, serving
as its modern replacement.
The logic for `show-ref --exists` is minimal. Rather than creating a
shared helper function, which would be overkill for ~20 lines of code,
its implementation is intentionally duplicated here. This contrasts with
`git refs list`, where sharing the larger implementation of
`for-each-ref` was necessary.
Documentation for the new subcommand is also added to the `git-refs(1)`
man page.
Mentored-by: Patrick Steinhardt <ps@pks.im>
Mentored-by: shejialuo <shejialuo@gmail.com>
Signed-off-by: Meet Soni <meetsoni3017@gmail.com>
Acked-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The GitLab CI runners using Windows machines have realtime monitoring
via Windows Defender enabled by default. This has just now started to
cause issues in our CI jobs using Microsoft Visual Studio:
Program 'meson.exe' failed to run: Operation did not complete successfully because the file contains a virus or
potentially unwanted softwareAt line:356 char:1
+ meson setup build --vsenv -Dperl=disabled -Dbackend_max_links=1 -Dcre ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~.
At line:356 char:1
+ meson setup build --vsenv -Dperl=disabled -Dbackend_max_links=1 -Dcre ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ResourceUnavailable: (:) [], ApplicationFailedException
+ FullyQualifiedErrorId : NativeCommandFailed
The detected issue is more likely than not completely bogus, but it
breaks the jobs.
Fix the issue by disabling realtime monitoring. Besides unbreaking CI,
it also improves our build times a bit:
- Building Git goes from 26 to 22 minutes.
- Executing tests goes from ~1h for one slice of tests to ~30 minutes.
This is still painfully slow, but the issue here is that the Windows
runners on GitLab CI are quite underwhelming overall.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a missed backtick to the end of a code segment so that it will be
rendered like preceding examples.
I deeply appreciate the thoroughness of this documentation. I noticed
the formatting discrepancy reading https://git-scm.com/docs/git-config.
Signed-off-by: Kyle E. Mitchell <kyle@kemitchell.com>
Acked-by: Jean-Noël AVILA <avila.jn@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Update the instructions on using GGG in the MyFirstContribution
document to say that a GitHub PR can be made against `git/git`
instead of `gitgitgadget/git`.
* ds/doc-ggg-pr-fork-clarify:
doc: clarify which remotes can be used with GitGitGadget
Fix missing single-quote pairs in a documentation page.
* kh/doc-interpret-trailers-markup-fix:
doc: interpret-trailers: close all pairs of single quotes
The command Revert Changes has two different erroneous behaviors
depending on the Tcl version used.
The command uses a "chord" facility where different "notes" are
evaluated asynchronously and any error is reported after all of them
have finished. The intent is that a private namespace is used where
the notes can store the error state. Tcl 9 changed namespace handling
in a subtle way, as https://www.tcl-lang.org/software/tcltk/9.0.html
summarizes under "Notable incompatibilities":
Unqualified varnames resolved in current namespace, not global.
Note that in almost all cases where this causes a change, the
change is actually the removal of a latent bug.
And that's exactly what happens here.
- Under Tcl 9:
- When the command operates without any errors, the variable `err`
is never set. When the error handler wants to inspect `err` (in
the correct private namespace), it does not find it and a Tcl
error about an unset variable occurs. Incidentally, this is also
the case when the user cancels the operation with the option
"Do Nothing"!
On the other hand, when an error occurs during the operation, `err`
is set and found as intended.
Check for the existence of the variable `err` before the attempt to
read it.
- Under Tcl 8.6:
The error handler looks up `err` in the global namespace, which is
bogus and unintended. The variable is set due to the many
`catch ... err` that occur during startup in the global namespace.
- When the command operates without any errors, the error handler
finds the global `err`, which happens to be the empty string at
this point, and no error is reported.
On the other hand, when an error occurs during the operation, the
global `err` is set and found, so that an error is reported as
desired.
However, the value of `err` persists in the global namespace. When
the command is repeated, an error is reported again, even if there
was actually no error, and even "Do Nothing" was used to cancel
the operation.
Clear the global `err` before the operation begins.
The lingering error message is not a problem under Tcl 9, because a
pristine namespace is established every time the command is used.
This fixes https://github.com/j6t/git-gui/issues/21.
Helped-by: Igor Stepushchik
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Git does not really "store the contents of the next commit"
anywhere; rather, you, the user, use the index to prepare it.
Signed-off-by: Julia Evans <julia@jvns.ca>
[jc: made the change relative to what is already in 'next']
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When comparing large commit ranges (e.g., 250,000+ commits), range-diff
attempts to allocate an n×n cost matrix that can exhaust available
memory. For example, with 256,784 commits (n = 513,568), the matrix
would require approximately 1TB of memory (513,568² × 4 bytes),
causing either immediate segmentation faults due to integer overflow or
system hangs.
Add a memory limit check in get_correspondences() before allocating the
cost matrix. This check uses the total size in bytes (n² × sizeof(int))
and compares it against a configurable maximum, preventing both
excessive memory usage and integer overflow issues.
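A rough sketch of such a guard in get_correspondences(), where a->nr
and b->nr are the two ranges' commit counts, the variable names are
illustrative, and the real error message prints human-readable sizes:

    size_t n = st_add(a->nr, b->nr);
    size_t bytes = st_mult(st_mult(n, n), sizeof(int));
    int *cost;

    if (bytes > max_memory)
            die(_("cost matrix needs %"PRIuMAX" bytes, "
                  "--max-memory limit is %"PRIuMAX" bytes"),
                (uintmax_t)bytes, (uintmax_t)max_memory);
    ALLOC_ARRAY(cost, st_mult(n, n));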
The limit is configurable via a new --max-memory option that accepts
human-readable sizes (e.g., "1G", "500M"). The default is 4GB on 64-bit
systems and 2GB on 32-bit systems. This allows comparing ranges of
approximately 32,000 (respectively 16,000) commits, which is generous
for real-world use cases while preventing impractical operations.
When the limit is exceeded, range-diff now displays a clear error
message showing both the requested memory size and the maximum allowed,
formatted in human-readable units for better user experience.
Example usage:
git range-diff --max-memory=1G branch1...branch2
git range-diff --max-memory=500M base..topic1 base..topic2
This approach was chosen over alternatives:
- Pre-counting commits: Would require spawning additional git processes
and reading all commits twice
- Limiting by commit count: Less precise than actual memory usage
- Streaming approach: Would require significant refactoring of the
current algorithm
This issue was previously discussed in:
https://lore.kernel.org/git/RFC-cover-v2-0.5-00000000000-20211210T122901Z-avarab@gmail.com/
Acked-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Paulo Casaretto <pcasaretto@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git describe <blob>" misbehaves and/or crashes in some corner
cases, which has been taught to exit with failure gracefully.
* jk/describe-blob:
describe: pass commit to describe_commit()
describe: handle blob traversal with no commits
describe: catch unborn branch in describe_blob()
describe: error if blob not found
describe: pass oid struct by const pointer
"git fetch" can clobber a symref that is dangling when the
remote-tracking HEAD is set to auto update, which has been
corrected.
* jk/no-clobber-dangling-symref-with-fetch:
refs: do not clobber dangling symrefs
t5510: prefer "git -C" to subshell for followRemoteHEAD tests
t5510: stop changing top-level working directory
t5510: make confusing config cleanup more explicit
Discord has been added to the first contribution documentation as
another way to ask for help.
* ds/doc-community-discord:
doc: add discord to ways of getting help
Code clean-ups.
* ps/reftable-libgit2-cleanup:
refs/reftable: always reload stacks when creating lock
reftable: don't second-guess errors from flock interface
reftable/stack: handle outdated stacks when compacting
reftable/stack: allow passing flags to `reftable_stack_add()`
reftable/stack: fix compiler warning due to missing braces
reftable/stack: reorder code to avoid forward declarations
reftable/writer: drop Git-specific `QSORT()` macro
reftable/writer: fix type used for number of records
Our 'git last-modified' performs a revision walk, and computes a diff at
each point in the walk to figure out whether a given revision changed
any of the paths it considers interesting.
When changed-path Bloom filters are available, we can avoid computing
many such diffs. Before computing a diff, we first check if any of the
remaining paths of interest were possibly changed at a given commit by
consulting its Bloom filter. If any of them might have been changed, we
have to compute the diff after all.
If none of those queries returned "maybe", we know that the given commit
doesn't change any of the paths which are interesting to us, so we can
avoid computing the diff in this case.
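A rough sketch using git's existing Bloom filter helpers; `settings`
stands for the commit-graph's Bloom filter settings, and in the real
code every remaining path of interest is queried before the diff is
skipped:

    struct bloom_filter *filter = get_bloom_filter(r, commit);
    int maybe_changed = !filter;    /* without a filter we must diff anyway */

    if (filter) {
            struct bloom_key key;

            fill_bloom_key(path, strlen(path), &key, settings);
            maybe_changed = bloom_filter_contains(filter, &key, settings);
            clear_bloom_key(&key);
    }
    if (maybe_changed)
            compute_diff_at(commit);        /* illustrative stand-in */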
Comparing the perf test results on git.git:
Test HEAD~ HEAD
------------------------------------------------------------------------------------
8020.1: top-level last-modified 4.49(4.34+0.11) 2.22(2.05+0.09) -50.6%
8020.2: top-level recursive last-modified 5.64(5.45+0.11) 5.62(5.30+0.11) -0.4%
8020.3: subdir last-modified 0.11(0.06+0.04) 0.07(0.03+0.04) -36.4%
Based-on-patch-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Toon Claes <toon@iotcl.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This just runs some simple last-modified commands. We already test
correctness in the regular suite, so this is just about finding
performance regressions from one version to another.
Based-on-patch-by: Jeff King <peff@peff.net>
Signed-off-by: Toon Claes <toon@iotcl.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Similar to git-blame(1), introduce a new subcommand
git-last-modified(1). This command shows the most recent modification to
paths in a tree. It does so by expanding the tree at a given commit,
taking note of the current state of each path, and then walking
backwards through history looking for the commits where each path
changed into its final object ID.
Based-on-patch-by: Jeff King <peff@peff.net>
Improved-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Toon Claes <toon@iotcl.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>