Merge branch 'jt/push-avoid-lazy-fetch'

Performance hack.

* jt/push-avoid-lazy-fetch:
  send-pack: never fetch when checking exclusions
diff --git a/Documentation/RelNotes/2.24.0.txt b/Documentation/RelNotes/2.24.0.txt
index b71763e..bda29d2 100644
--- a/Documentation/RelNotes/2.24.0.txt
+++ b/Documentation/RelNotes/2.24.0.txt
@@ -68,6 +68,9 @@
  * A configuration variable tells "git fetch" to write the commit
    graph after finishing.
 
+ * "git add -i" has been taught to show the total number of hunks and
+   the hunks that have been processed so far when showing prompts.
+
 
 Performance, Internal Implementation, Development Support etc.
 
@@ -123,6 +126,8 @@
 
  * Update the way build artifacts in t/helper/ directory are ignored.
 
+ * Preparation for SHA-256 upgrade continues.
+
 
 Fixes since v2.23
 -----------------
@@ -254,6 +259,44 @@
    directory has been fixed.
    (merge 83e3ad3b12 jt/merge-recursive-symlink-is-not-a-dir-in-way later to maint).
 
+ * The rename detection logic sorts a list of rename source candidates
+   by similarity to pick the best candidate, which means that a tie
+   between sources with the same similarity is broken by their original
+   position in the candidate list (which is sorted by path).  The sort
+   by similarity is now forced to use a stable sort, which the system
+   supplied qsort(3) does not promise, to ensure consistent results
+   across platforms.
+   (merge 2049b8dc65 js/diff-rename-force-stable-sort later to maint).
+
+ * The code to skip the "UTF" and "UTF-" prefixes when computing an
+   advice message did not work correctly when the prefix was "UTF";
+   this has been fixed.
+   (merge b181676ce9 rs/convert-fix-utf-without-dash later to maint).
+
+ * Author names taken from SVN repositories may have extra leading or
+   trailing whitespace, which is now stripped.
+   (merge 4ddd4bddb1 tk/git-svn-trim-author-name later to maint).
+
+ * "git rebase -i" showed a wrong HEAD while "reword" open the editor.
+   (merge b0a3186140 pw/rebase-i-show-HEAD-to-reword later to maint).
+
+ * A few simplifications and bugfixes to the PCRE interface.
+   (merge c581e4a749 ab/pcre-jit-fixes later to maint).
+
+ * PCRE fixes.
+   (merge ff61681b46 cb/pcre1-cleanup later to maint).
+
+ * "git range-diff" segfaulted when diff.noprefix configuration was
+   used, as it blindly expected the patch it internally generates to
+   have the standard a/ and b/ prefixes.  The command now forces the
+   internal patch to be built without any prefix, so that it is not
+   affected by any end-user configuration.
+   (merge 937b76ed49 js/range-diff-noprefix later to maint).
+
+ * "git stash apply" in a subdirectory of a secondary worktree failed
+   to access the worktree correctly, which has been corrected.
+   (merge dfd557c978 js/stash-apply-in-secondary-worktree later to maint).
+
  * Other code cleanup, docfix, build fix, etc.
    (merge d1387d3895 en/fast-import-merge-doc later to maint).
    (merge 1c24a54ea4 bm/repository-layout-typofix later to maint).
@@ -285,3 +328,10 @@
    (merge 24c681794f ps/my-first-contribution-alphasort later to maint).
    (merge 75b2c15435 cb/do-not-use-test-cmp-with-a later to maint).
    (merge cda0d497e3 bw/submodule-helper-usage-fix later to maint).
+   (merge fe0ed5d5e9 am/visual-studio-config-fix later to maint).
+   (merge 2e09c01232 sg/name-rev-cutoff-underflow-fix later to maint).
+   (merge ddb3c856f3 as/shallow-slab-use-fix later to maint).
+   (merge 71f4960b91 js/mingw-spawn-with-spaces-in-path later to maint).
+   (merge 53d687bf5f ah/cleanups later to maint).
+   (merge f537485fa5 rs/test-remove-useless-debugging-cat later to maint).
+   (merge 11a3d3aadd dl/rev-list-doc-cleanup later to maint).
diff --git a/Documentation/config/fetch.txt b/Documentation/config/fetch.txt
index e8cb205..f119402 100644
--- a/Documentation/config/fetch.txt
+++ b/Documentation/config/fetch.txt
@@ -70,6 +70,16 @@
 	linkgit:git-fetch[1] and linkgit:git-pull[1] commands.
 	Defaults to true.
 
+fetch.parallel::
+	Specifies the maximal number of fetch operations to be run in parallel
+	at a time (submodules, or remotes when the `--multiple` option of
+	linkgit:git-fetch[1] is in effect).
++
+A value of 0 will give some reasonable default. If unset, it defaults to 1.
++
+For submodules, this setting can be overridden using the `submodule.fetchJobs`
+config setting.
+
 fetch.writeCommitGraph::
 	Set to true to write a commit-graph after every `git fetch` command
 	that downloads a pack-file from a remote. Using the `--split` option,
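For illustration, the new `fetch.parallel` setting might be exercised as in
the following sketch (the remote names are hypothetical):

    git config fetch.parallel 4          # run up to 4 fetch operations at a time
    git fetch --multiple origin fork     # remotes are now fetched in parallel
    git config submodule.fetchJobs 2     # submodules can still be tuned independently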
diff --git a/Documentation/config/trace2.txt b/Documentation/config/trace2.txt
index 2edbfb0..4ce0b9a 100644
--- a/Documentation/config/trace2.txt
+++ b/Documentation/config/trace2.txt
@@ -54,3 +54,9 @@
 	By default, these errors are suppressed and tracing is
 	silently disabled.  May be overridden by the
 	`GIT_TRACE2_DST_DEBUG` environment variable.
+
+trace2.maxFiles::
+	Integer.  When writing trace files to a target directory, do not
+	write additional traces if we would exceed this many files. Instead,
+	write a sentinel file that will block further tracing to this
+	directory. Defaults to 0, which disables this check.
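A sketch of how `trace2.maxFiles` interacts with a directory target; the
trace directory path below is hypothetical:

    git config trace2.maxFiles 50
    mkdir -p /tmp/git-traces
    GIT_TRACE2_EVENT=/tmp/git-traces git status
    # once writing another trace would exceed 50 files, a sentinel file
    # (git-trace2-discard) is written there and further traces are skipped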
diff --git a/Documentation/fetch-options.txt b/Documentation/fetch-options.txt
index 99df1f3..43b9ff3 100644
--- a/Documentation/fetch-options.txt
+++ b/Documentation/fetch-options.txt
@@ -160,10 +160,15 @@
 
 -j::
 --jobs=<n>::
-	Number of parallel children to be used for fetching submodules.
-	Each will fetch from different submodules, such that fetching many
-	submodules will be faster. By default submodules will be fetched
-	one at a time.
+	Number of parallel children to be used for all forms of fetching.
++
+If the `--multiple` option was specified, the different remotes will be fetched
+in parallel. If multiple submodules are fetched, they will be fetched in
+parallel. To control them independently, use the config settings
+`fetch.parallel` and `submodule.fetchJobs` (see linkgit:git-config[1]).
++
+Typically, parallel recursive and multi-remote fetches will be faster. By
+default fetches are performed sequentially, not in parallel.
 
 --no-recurse-submodules::
 	Disable recursive fetching of submodules (this has the same effect as
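For example, a single `--jobs` value now applies to both multi-remote and
submodule fetching; the remote names here are hypothetical:

    git fetch --multiple --jobs=4 origin fork    # fetch remotes in parallel
    git fetch --recurse-submodules --jobs=4      # fetch submodules in parallel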
diff --git a/Documentation/git-clean.txt b/Documentation/git-clean.txt
index 0028ff1..a7f309d 100644
--- a/Documentation/git-clean.txt
+++ b/Documentation/git-clean.txt
@@ -26,18 +26,20 @@
 OPTIONS
 -------
 -d::
-	Remove untracked directories in addition to untracked files.
-	If an untracked directory is managed by a different Git
-	repository, it is not removed by default.  Use -f option twice
-	if you really want to remove such a directory.
+	Normally, when no <path> is specified, git clean will not
+	recurse into untracked directories to avoid removing too much.
+	Specify -d to have it recurse into such directories as well.
+	If any paths are specified, -d is irrelevant; all untracked
+	files matching the specified paths (with exceptions for nested
+	git directories mentioned under `--force`) will be removed.
 
 -f::
 --force::
 	If the Git configuration variable clean.requireForce is not set
 	to false, 'git clean' will refuse to delete files or directories
-	unless given -f, -n or -i. Git will refuse to delete directories
-	with .git sub directory or file unless a second -f
-	is given.
+	unless given -f or -i.  Git will refuse to modify untracked
+	nested git repositories (directories with a .git subdirectory)
+	unless a second -f is given.
 
 -i::
 --interactive::
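The reworded behaviour corresponds to invocations along these lines (a
sketch, not an exhaustive list):

    git clean -dn      # dry run, recursing into untracked directories
    git clean -df      # remove untracked files and directories
    git clean -dff     # also remove untracked nested git repositories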
diff --git a/Documentation/git-fast-export.txt b/Documentation/git-fast-export.txt
index 784e934..37634bf 100644
--- a/Documentation/git-fast-export.txt
+++ b/Documentation/git-fast-export.txt
@@ -75,11 +75,20 @@
 	Before processing any input, load the marks specified in
 	<file>.  The input file must exist, must be readable, and
 	must use the same format as produced by --export-marks.
+
+--mark-tags::
+	In addition to labelling blobs and commits with mark ids, also
+	label tags.  This is useful in conjunction with
+	`--export-marks` and `--import-marks`, and is also useful (and
+	necessary) for exporting nested tags.  It does not hurt
+	other cases and would be the default, but many fast-import
+	frontends are not prepared to accept tags with mark
+	identifiers.
 +
-Any commits that have already been marked will not be exported again.
-If the backend uses a similar --import-marks file, this allows for
-incremental bidirectional exporting of the repository by keeping the
-marks the same across runs.
+Any commits (or tags) that have already been marked will not be
+exported again.  If the backend uses a similar --import-marks file,
+this allows for incremental bidirectional exporting of the repository
+by keeping the marks the same across runs.
 
 --fake-missing-tagger::
 	Some old repositories have tags without a tagger.  The
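A hedged example of exporting a nested tag with the new option; the ref name
and marks file path are hypothetical:

    git fast-export --mark-tags --export-marks=/tmp/marks \
        refs/tags/nested-tag >stream.fi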
diff --git a/Documentation/git-fast-import.txt b/Documentation/git-fast-import.txt
index 0bb2762..a3f1e0c 100644
--- a/Documentation/git-fast-import.txt
+++ b/Documentation/git-fast-import.txt
@@ -337,6 +337,13 @@
 	`commit` command.  This command is optional and is not
 	needed to perform an import.
 
+`alias`::
+	Record that a mark refers to a given object without first
+	creating any new object.  Using --import-marks and referring
+	to missing marks will cause fast-import to fail, so aliases
+	can provide a way to set otherwise pruned commits to a valid
+	value (e.g. the nearest non-pruned ancestor).
+
 `checkpoint`::
 	Forces fast-import to close the current packfile, generate its
 	unique SHA-1 checksum and index, and start a new packfile.
@@ -774,6 +781,7 @@
 
 ....
 	'tag' SP <name> LF
+	mark?
 	'from' SP <commit-ish> LF
 	original-oid?
 	'tagger' (SP <name>)? SP LT <email> GT SP <when> LF
@@ -913,6 +921,21 @@
 +
 The `LF` after `<delim> LF` is optional (it used to be required).
 
+`alias`
+~~~~~~~
+Record that a mark refers to a given object without first creating any
+new object.
+
+....
+	'alias' LF
+	mark
+	'to' SP <commit-ish> LF
+	LF?
+....
+
+For a detailed description of `<commit-ish>` see above under `from`.
+
+
 `checkpoint`
 ~~~~~~~~~~~~
 Forces fast-import to close the current packfile, start a new one, and to
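A sketch of the new `alias` command in a fast-import stream, assuming marks
:2 and :3 were previously assigned (e.g. loaded via `--import-marks`) and :3
referred to a pruned commit:

    alias
    mark :3
    to :2

Here mark :3 is redirected to the object already known as :2, say the
nearest non-pruned ancestor.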
diff --git a/Documentation/git-grep.txt b/Documentation/git-grep.txt
index 2d27969..c89fb56 100644
--- a/Documentation/git-grep.txt
+++ b/Documentation/git-grep.txt
@@ -271,6 +271,23 @@
 
 -f <file>::
 	Read patterns from <file>, one per line.
++
+Passing the pattern via <file> allows for providing a search pattern
+containing a \0.
++
+Not all pattern types support patterns containing \0. Git will error
+out if a given pattern type can't support such a pattern. The
+`--perl-regexp` pattern type when compiled against the PCRE v2 backend
+has the widest support for these types of patterns.
++
+In versions of Git before 2.23.0 patterns containing \0 were silently
+treated as fixed strings. This was never documented; there were also
+odd and undocumented interactions between e.g. non-ASCII patterns
+containing \0 and `--ignore-case`.
++
+In future versions we may learn to support patterns containing \0 for
+more search backends; until then we'll die when the pattern type in
+question doesn't support them.
 
 -e::
 	The next parameter is the pattern. This option has to be
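A sketch of passing a NUL-containing pattern from a file, assuming Git was
built with the PCRE v2 backend:

    printf 'foo\0bar' >pattern
    git grep --perl-regexp -f pattern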
diff --git a/Documentation/git-gui.txt b/Documentation/git-gui.txt
index 5f93f80..c9d7e96 100644
--- a/Documentation/git-gui.txt
+++ b/Documentation/git-gui.txt
@@ -112,15 +112,9 @@
 versions are distributed as part of the Git suite for the convenience
 of end users.
 
-A 'git gui' development repository can be obtained from:
+The official repository of the 'git gui' project can be found at:
 
-  git clone git://repo.or.cz/git-gui.git
-
-or
-
-  git clone http://repo.or.cz/r/git-gui.git
-
-or browsed online at http://repo.or.cz/w/git-gui.git/[].
+  https://github.com/prati0100/git-gui.git/
 
 GIT
 ---
diff --git a/Documentation/git-rev-list.txt b/Documentation/git-rev-list.txt
index 9392760..025c911 100644
--- a/Documentation/git-rev-list.txt
+++ b/Documentation/git-rev-list.txt
@@ -9,59 +9,7 @@
 SYNOPSIS
 --------
 [verse]
-'git rev-list' [ --max-count=<number> ]
-	     [ --skip=<number> ]
-	     [ --max-age=<timestamp> ]
-	     [ --min-age=<timestamp> ]
-	     [ --sparse ]
-	     [ --merges ]
-	     [ --no-merges ]
-	     [ --min-parents=<number> ]
-	     [ --no-min-parents ]
-	     [ --max-parents=<number> ]
-	     [ --no-max-parents ]
-	     [ --first-parent ]
-	     [ --remove-empty ]
-	     [ --full-history ]
-	     [ --not ]
-	     [ --all ]
-	     [ --branches[=<pattern>] ]
-	     [ --tags[=<pattern>] ]
-	     [ --remotes[=<pattern>] ]
-	     [ --glob=<glob-pattern> ]
-	     [ --ignore-missing ]
-	     [ --stdin ]
-	     [ --quiet ]
-	     [ --topo-order ]
-	     [ --parents ]
-	     [ --timestamp ]
-	     [ --left-right ]
-	     [ --left-only ]
-	     [ --right-only ]
-	     [ --cherry-mark ]
-	     [ --cherry-pick ]
-	     [ --encoding=<encoding> ]
-	     [ --(author|committer|grep)=<pattern> ]
-	     [ --regexp-ignore-case | -i ]
-	     [ --extended-regexp | -E ]
-	     [ --fixed-strings | -F ]
-	     [ --date=<format>]
-	     [ [ --objects | --objects-edge | --objects-edge-aggressive ]
-	       [ --unpacked ]
-	       [ --object-names | --no-object-names ]
-	       [ --filter=<filter-spec> [ --filter-print-omitted ] ] ]
-	     [ --missing=<missing-action> ]
-	     [ --pretty | --header ]
-	     [ --bisect ]
-	     [ --bisect-vars ]
-	     [ --bisect-all ]
-	     [ --merge ]
-	     [ --reverse ]
-	     [ --walk-reflogs ]
-	     [ --no-walk ] [ --do-walk ]
-	     [ --count ]
-	     [ --use-bitmap-index ]
-	     <commit>... [ \-- <paths>... ]
+'git rev-list' [<options>] <commit>... [[--] <path>...]
 
 DESCRIPTION
 -----------
diff --git a/Documentation/technical/api-trace2.txt b/Documentation/technical/api-trace2.txt
index 71eb081..a045dbe 100644
--- a/Documentation/technical/api-trace2.txt
+++ b/Documentation/technical/api-trace2.txt
@@ -128,7 +128,7 @@
 
 ------------
 $ cat ~/log.event
-{"event":"version","sid":"sid":"20190408T191610.507018Z-H9b68c35f-P000059a8","thread":"main","time":"2019-01-16T17:28:42.620713Z","file":"common-main.c","line":38,"evt":"1","exe":"2.20.1.155.g426c96fcdb"}
+{"event":"version","sid":"sid":"20190408T191610.507018Z-H9b68c35f-P000059a8","thread":"main","time":"2019-01-16T17:28:42.620713Z","file":"common-main.c","line":38,"evt":"2","exe":"2.20.1.155.g426c96fcdb"}
 {"event":"start","sid":"20190408T191610.507018Z-H9b68c35f-P000059a8","thread":"main","time":"2019-01-16T17:28:42.621027Z","file":"common-main.c","line":39,"t_abs":0.001173,"argv":["git","version"]}
 {"event":"cmd_name","sid":"20190408T191610.507018Z-H9b68c35f-P000059a8","thread":"main","time":"2019-01-16T17:28:42.621122Z","file":"git.c","line":432,"name":"version","hierarchy":"version"}
 {"event":"exit","sid":"20190408T191610.507018Z-H9b68c35f-P000059a8","thread":"main","time":"2019-01-16T17:28:42.621236Z","file":"git.c","line":662,"t_abs":0.001227,"code":0}
@@ -142,10 +142,9 @@
 
 include::../trace2-target-values.txt[]
 
-If the target already exists and is a directory, the traces will be
-written to files (one per process) underneath the given directory. They
-will be named according to the last component of the SID (optionally
-followed by a counter to avoid filename collisions).
+When trace files are written to a target directory, they will be named according
+to the last component of the SID (optionally followed by a counter to avoid
+filename collisions).
 
 == Trace2 API
 
@@ -605,17 +604,35 @@
 ==== Event-Specific Key/Value Pairs
 
 `"version"`::
-	This event gives the version of the executable and the EVENT format.
+	This event gives the version of the executable and the EVENT format. It
+	should always be the first event in a trace session. The EVENT format
+	version will be incremented if new event types are added, if existing
+	fields are removed, or if there are significant changes in
+	interpretation of existing events or fields. Smaller changes, such as
+	adding a new field to an existing event, will not require an increment
+	to the EVENT format version.
 +
 ------------
 {
 	"event":"version",
 	...
-	"evt":"1",		       # EVENT format version
+	"evt":"2",		       # EVENT format version
 	"exe":"2.20.1.155.g426c96fcdb" # git version
 }
 ------------
 
+`"discard"`::
+	This event is written to the git-trace2-discard sentinel file if there
+	are too many files in the target trace directory (see the
+	trace2.maxFiles config option).
++
+------------
+{
+	"event":"discard",
+	...
+}
+------------
+
 `"start"`::
 	This event contains the complete argv received by main().
 +
diff --git a/Documentation/trace2-target-values.txt b/Documentation/trace2-target-values.txt
index 27d3c64..3985b6d 100644
--- a/Documentation/trace2-target-values.txt
+++ b/Documentation/trace2-target-values.txt
@@ -2,7 +2,9 @@
 * `0` or `false` - Disables the target.
 * `1` or `true` - Writes to `STDERR`.
 * `[2-9]` - Writes to the already opened file descriptor.
-* `<absolute-pathname>` - Writes to the file in append mode.
+* `<absolute-pathname>` - Writes to the file in append mode. If the target
+already exists and is a directory, the traces will be written to files (one
+per process) underneath the given directory.
 * `af_unix:[<socket_type>:]<absolute-pathname>` - Write to a
 Unix DomainSocket (on platforms that support them).  Socket
 type can be either `stream` or `dgram`; if omitted Git will
diff --git a/Makefile b/Makefile
index d644527..de60c8e 100644
--- a/Makefile
+++ b/Makefile
@@ -34,13 +34,8 @@
 # library. Support for version 1 will likely be removed in some future
 # release of Git, as upstream has all but abandoned it.
 #
-# When using USE_LIBPCRE1, define NO_LIBPCRE1_JIT if the PCRE v1
-# library is compiled without --enable-jit. We will auto-detect
-# whether the version of the PCRE v1 library in use has JIT support at
-# all, but we unfortunately can't auto-detect whether JIT support
-# hasn't been compiled in in an otherwise JIT-supporting version. If
-# you have link-time errors about a missing `pcre_jit_exec` define
-# this, or recompile PCRE v1 with --enable-jit.
+# When using USE_LIBPCRE1, define NO_LIBPCRE1_JIT if you want to
+# disable JIT even if supported by your library.
 #
 # Define LIBPCREDIR=/foo/bar if your PCRE header and library files are
 # in /foo/bar/include and /foo/bar/lib directories. Which version of
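For illustration, the build knobs described in this comment might be used as
follows (a sketch; PCRE support is off unless requested):

    make USE_LIBPCRE=Yes                          # build against PCRE v2
    make USE_LIBPCRE1=Yes NO_LIBPCRE1_JIT=Yes     # PCRE v1 with JIT disabled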
@@ -986,6 +981,7 @@
 LIB_OBJS += sideband.o
 LIB_OBJS += sigchain.o
 LIB_OBJS += split-index.o
+LIB_OBJS += stable-qsort.o
 LIB_OBJS += strbuf.o
 LIB_OBJS += streaming.o
 LIB_OBJS += string-list.o
@@ -1731,7 +1727,6 @@
 endif
 ifdef INTERNAL_QSORT
 	COMPAT_CFLAGS += -DINTERNAL_QSORT
-	COMPAT_OBJS += compat/qsort.o
 endif
 ifdef HAVE_ISO_QSORT_S
 	COMPAT_CFLAGS += -DHAVE_ISO_QSORT_S
@@ -3047,6 +3042,10 @@
 	@false
 .PHONY: rpm
 
+ifneq ($(INCLUDE_DLLS_IN_ARTIFACTS),)
+OTHER_PROGRAMS += $(shell echo *.dll t/helper/*.dll)
+endif
+
 artifacts-tar:: $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) $(OTHER_PROGRAMS) \
 		GIT-BUILD-OPTIONS $(TEST_PROGRAMS) $(test_bindir_programs) \
 		$(MOFILES)
diff --git a/apply.c b/apply.c
index 57a61f2..f8a046a 100644
--- a/apply.c
+++ b/apply.c
@@ -1361,11 +1361,32 @@
 			if (check_header_line(*linenr, patch))
 				return -1;
 			if (res > 0)
-				return offset;
+				goto done;
 			break;
 		}
 	}
 
+done:
+	if (!patch->old_name && !patch->new_name) {
+		if (!patch->def_name) {
+			error(Q_("git diff header lacks filename information when removing "
+				 "%d leading pathname component (line %d)",
+				 "git diff header lacks filename information when removing "
+				 "%d leading pathname components (line %d)",
+				 parse_hdr_state.p_value),
+			      parse_hdr_state.p_value, *linenr);
+			return -128;
+		}
+		patch->old_name = xstrdup(patch->def_name);
+		patch->new_name = xstrdup(patch->def_name);
+	}
+	if ((!patch->new_name && !patch->is_delete) ||
+	    (!patch->old_name && !patch->is_new)) {
+		error(_("git diff header lacks filename information "
+			"(line %d)"), *linenr);
+		return -128;
+	}
+	patch->is_toplevel_relative = 1;
 	return offset;
 }
 
@@ -1546,26 +1567,6 @@
 				return -128;
 			if (git_hdr_len <= len)
 				continue;
-			if (!patch->old_name && !patch->new_name) {
-				if (!patch->def_name) {
-					error(Q_("git diff header lacks filename information when removing "
-							"%d leading pathname component (line %d)",
-							"git diff header lacks filename information when removing "
-							"%d leading pathname components (line %d)",
-							state->p_value),
-						     state->p_value, state->linenr);
-					return -128;
-				}
-				patch->old_name = xstrdup(patch->def_name);
-				patch->new_name = xstrdup(patch->def_name);
-			}
-			if ((!patch->new_name && !patch->is_delete) ||
-			    (!patch->old_name && !patch->is_new)) {
-				error(_("git diff header lacks filename information "
-					     "(line %d)"), state->linenr);
-				return -128;
-			}
-			patch->is_toplevel_relative = 1;
 			*hdrsize = git_hdr_len;
 			return offset;
 		}
diff --git a/attr.c b/attr.c
index d02d081..11f19b5 100644
--- a/attr.c
+++ b/attr.c
@@ -62,7 +62,7 @@
 
 /* The container for objects stored in "struct attr_hashmap" */
 struct attr_hash_entry {
-	struct hashmap_entry ent; /* must be the first member! */
+	struct hashmap_entry ent;
 	const char *key; /* the key; memory should be owned by value */
 	size_t keylen; /* length of the key */
 	void *value; /* the stored value */
@@ -70,12 +70,14 @@
 
 /* attr_hashmap comparison function */
 static int attr_hash_entry_cmp(const void *unused_cmp_data,
-			       const void *entry,
-			       const void *entry_or_key,
+			       const struct hashmap_entry *eptr,
+			       const struct hashmap_entry *entry_or_key,
 			       const void *unused_keydata)
 {
-	const struct attr_hash_entry *a = entry;
-	const struct attr_hash_entry *b = entry_or_key;
+	const struct attr_hash_entry *a, *b;
+
+	a = container_of(eptr, const struct attr_hash_entry, ent);
+	b = container_of(entry_or_key, const struct attr_hash_entry, ent);
 	return (a->keylen != b->keylen) || strncmp(a->key, b->key, a->keylen);
 }
 
@@ -98,10 +100,10 @@
 	if (!map->map.tablesize)
 		attr_hashmap_init(map);
 
-	hashmap_entry_init(&k, memhash(key, keylen));
+	hashmap_entry_init(&k.ent, memhash(key, keylen));
 	k.key = key;
 	k.keylen = keylen;
-	e = hashmap_get(&map->map, &k, NULL);
+	e = hashmap_get_entry(&map->map, &k, ent, NULL);
 
 	return e ? e->value : NULL;
 }
@@ -117,12 +119,12 @@
 		attr_hashmap_init(map);
 
 	e = xmalloc(sizeof(struct attr_hash_entry));
-	hashmap_entry_init(e, memhash(key, keylen));
+	hashmap_entry_init(&e->ent, memhash(key, keylen));
 	e->key = key;
 	e->keylen = keylen;
 	e->value = value;
 
-	hashmap_add(&map->map, e);
+	hashmap_add(&map->map, &e->ent);
 }
 
 struct all_attrs_item {
@@ -161,12 +163,12 @@
 	if (size != check->all_attrs_nr) {
 		struct attr_hash_entry *e;
 		struct hashmap_iter iter;
-		hashmap_iter_init(&map->map, &iter);
 
 		REALLOC_ARRAY(check->all_attrs, size);
 		check->all_attrs_nr = size;
 
-		while ((e = hashmap_iter_next(&iter))) {
+		hashmap_for_each_entry(&map->map, &iter, e,
+					ent /* member name */) {
 			const struct git_attr *a = e->value;
 			check->all_attrs[a->attr_nr].attr = a;
 		}
diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index cb9ab1e..9f099b9 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -1,6 +1,5 @@
-resources:
-- repo: self
-  fetchDepth: 1
+variables:
+  Agent.Source.Git.ShallowFetchDepth: 1
 
 jobs:
 - job: windows_build
@@ -131,6 +130,165 @@
       PathtoPublish: t/failed-test-artifacts
       ArtifactName: failed-test-artifacts
 
+- job: vs_build
+  displayName: Visual Studio Build
+  condition: succeeded()
+  pool: Hosted VS2017
+  timeoutInMinutes: 240
+  steps:
+  - powershell: |
+      if ("$GITFILESHAREPWD" -ne "" -and "$GITFILESHAREPWD" -ne "`$`(gitfileshare.pwd)") {
+        net use s: \\gitfileshare.file.core.windows.net\test-cache "$GITFILESHAREPWD" /user:AZURE\gitfileshare /persistent:no
+        cmd /c mklink /d "$(Build.SourcesDirectory)\test-cache" S:\
+      }
+    displayName: 'Mount test-cache'
+    env:
+      GITFILESHAREPWD: $(gitfileshare.pwd)
+  - powershell: |
+      $urlbase = "https://dev.azure.com/git-for-windows/git/_apis/build/builds"
+      $id = ((Invoke-WebRequest -UseBasicParsing "${urlbase}?definitions=22&statusFilter=completed&resultFilter=succeeded&`$top=1").content | ConvertFrom-JSON).value[0].id
+      $downloadUrl = ((Invoke-WebRequest -UseBasicParsing "${urlbase}/$id/artifacts").content | ConvertFrom-JSON).value[1].resource.downloadUrl
+      (New-Object Net.WebClient).DownloadFile($downloadUrl,"git-sdk-64-minimal.zip")
+      Expand-Archive git-sdk-64-minimal.zip -DestinationPath . -Force
+      Remove-Item git-sdk-64-minimal.zip
+
+      # Let Git ignore the SDK and the test-cache
+      "/git-sdk-64-minimal/`n/test-cache/`n" | Out-File -NoNewLine -Encoding ascii -Append "$(Build.SourcesDirectory)\.git\info\exclude"
+    displayName: 'Download git-sdk-64-minimal'
+  - powershell: |
+      & git-sdk-64-minimal\usr\bin\bash.exe -lc @"
+        make vcxproj
+      "@
+      if (!$?) { exit(1) }
+    displayName: Generate Visual Studio Solution
+    env:
+      HOME: $(Build.SourcesDirectory)
+      MSYSTEM: MINGW64
+      DEVELOPER: 1
+      NO_PERL: 1
+      GIT_CONFIG_PARAMETERS: "'user.name=CI' 'user.email=ci@git'"
+  - powershell: |
+      $urlbase = "https://dev.azure.com/git/git/_apis/build/builds"
+      $id = ((Invoke-WebRequest -UseBasicParsing "${urlbase}?definitions=9&statusFilter=completed&resultFilter=succeeded&`$top=1").content | ConvertFrom-JSON).value[0].id
+      $downloadUrl = ((Invoke-WebRequest -UseBasicParsing "${urlbase}/$id/artifacts").content | ConvertFrom-JSON).value[0].resource.downloadUrl
+      (New-Object Net.WebClient).DownloadFile($downloadUrl, "compat.zip")
+      Expand-Archive compat.zip -DestinationPath . -Force
+      Remove-Item compat.zip
+    displayName: 'Download vcpkg artifacts'
+  - task: MSBuild@1
+    inputs:
+      solution: git.sln
+      platform: x64
+      configuration: Release
+      maximumCpuCount: 4
+  - powershell: |
+      & compat\vcbuild\vcpkg_copy_dlls.bat release
+      if (!$?) { exit(1) }
+      & git-sdk-64-minimal\usr\bin\bash.exe -lc @"
+        mkdir -p artifacts &&
+        eval \"`$(make -n artifacts-tar INCLUDE_DLLS_IN_ARTIFACTS=YesPlease ARTIFACTS_DIRECTORY=artifacts | grep ^tar)\"
+      "@
+      if (!$?) { exit(1) }
+    displayName: Bundle artifact tar
+    env:
+      HOME: $(Build.SourcesDirectory)
+      MSYSTEM: MINGW64
+      DEVELOPER: 1
+      NO_PERL: 1
+      MSVC: 1
+      VCPKG_ROOT: $(Build.SourcesDirectory)\compat\vcbuild\vcpkg
+  - powershell: |
+      $tag = (Invoke-WebRequest -UseBasicParsing "https://gitforwindows.org/latest-tag.txt").content
+      $version = (Invoke-WebRequest -UseBasicParsing "https://gitforwindows.org/latest-version.txt").content
+      $url = "https://github.com/git-for-windows/git/releases/download/${tag}/PortableGit-${version}-64-bit.7z.exe"
+      (New-Object Net.WebClient).DownloadFile($url,"PortableGit.exe")
+      & .\PortableGit.exe -y -oartifacts\PortableGit
+      # Wait until it is unpacked
+      while (-not @(Remove-Item -ErrorAction SilentlyContinue PortableGit.exe; $?)) { sleep 1 }
+    displayName: Download & extract portable Git
+  - task: PublishPipelineArtifact@0
+    displayName: 'Publish Pipeline Artifact: MSVC test artifacts'
+    inputs:
+      artifactName: 'vs-artifacts'
+      targetPath: '$(Build.SourcesDirectory)\artifacts'
+  - powershell: |
+      if ("$GITFILESHAREPWD" -ne "" -and "$GITFILESHAREPWD" -ne "`$`(gitfileshare.pwd)") {
+        cmd /c rmdir "$(Build.SourcesDirectory)\test-cache"
+      }
+    displayName: 'Unmount test-cache'
+    condition: true
+    env:
+      GITFILESHAREPWD: $(gitfileshare.pwd)
+
+- job: vs_test
+  displayName: Visual Studio Test
+  dependsOn: vs_build
+  condition: succeeded()
+  pool: Hosted
+  timeoutInMinutes: 240
+  strategy:
+    parallel: 10
+  steps:
+  - powershell: |
+      if ("$GITFILESHAREPWD" -ne "" -and "$GITFILESHAREPWD" -ne "`$`(gitfileshare.pwd)") {
+        net use s: \\gitfileshare.file.core.windows.net\test-cache "$GITFILESHAREPWD" /user:AZURE\gitfileshare /persistent:no
+        cmd /c mklink /d "$(Build.SourcesDirectory)\test-cache" S:\
+      }
+    displayName: 'Mount test-cache'
+    env:
+      GITFILESHAREPWD: $(gitfileshare.pwd)
+  - task: DownloadPipelineArtifact@0
+    displayName: 'Download Pipeline Artifact: VS test artifacts'
+    inputs:
+      artifactName: 'vs-artifacts'
+      targetPath: '$(Build.SourcesDirectory)'
+  - powershell: |
+      & PortableGit\git-cmd.exe --command=usr\bin\bash.exe -lc @"
+        test -f artifacts.tar.gz || {
+          echo No test artifacts found\; skipping >&2
+          exit 0
+        }
+        tar xf artifacts.tar.gz || exit 1
+
+        # Let Git ignore the SDK and the test-cache
+        printf '%s\n' /PortableGit/ /test-cache/ >>.git/info/exclude
+
+        cd t &&
+        PATH=\"`$PWD/helper:`$PATH\" &&
+        test-tool.exe run-command testsuite -V -x --write-junit-xml \
+                `$(test-tool.exe path-utils slice-tests \
+                        `$SYSTEM_JOBPOSITIONINPHASE `$SYSTEM_TOTALJOBSINPHASE t[0-9]*.sh)
+      "@
+      if (!$?) { exit(1) }
+    displayName: 'Test (parallel)'
+    env:
+      HOME: $(Build.SourcesDirectory)
+      MSYSTEM: MINGW64
+      NO_SVN_TESTS: 1
+      GIT_TEST_SKIP_REBASE_P: 1
+  - powershell: |
+      if ("$GITFILESHAREPWD" -ne "" -and "$GITFILESHAREPWD" -ne "`$`(gitfileshare.pwd)") {
+        cmd /c rmdir "$(Build.SourcesDirectory)\test-cache"
+      }
+    displayName: 'Unmount test-cache'
+    condition: true
+    env:
+      GITFILESHAREPWD: $(gitfileshare.pwd)
+  - task: PublishTestResults@2
+    displayName: 'Publish Test Results **/TEST-*.xml'
+    inputs:
+      mergeTestResults: true
+      testRunTitle: 'vs'
+      platform: Windows
+      publishRunAttachments: false
+    condition: succeededOrFailed()
+  - task: PublishBuildArtifacts@1
+    displayName: 'Publish trash directories of failed tests'
+    condition: failed()
+    inputs:
+      PathtoPublish: t/failed-test-artifacts
+      ArtifactName: failed-vs-test-artifacts
+
 - job: linux_clang
   displayName: linux-clang
   condition: succeeded()
diff --git a/bisect.c b/bisect.c
index e87ac29..e81c91d 100644
--- a/bisect.c
+++ b/bisect.c
@@ -707,7 +707,7 @@
 {
 	char bisect_rev_hex[GIT_MAX_HEXSZ + 1];
 
-	memcpy(bisect_rev_hex, oid_to_hex(bisect_rev), GIT_SHA1_HEXSZ + 1);
+	memcpy(bisect_rev_hex, oid_to_hex(bisect_rev), the_hash_algo->hexsz + 1);
 	update_ref(NULL, "BISECT_EXPECTED_REV", bisect_rev, NULL, 0, UPDATE_REFS_DIE_ON_ERR);
 
 	argv_checkout[2] = bisect_rev_hex;
diff --git a/blame.c b/blame.c
index 36a2e7e..29770e5 100644
--- a/blame.c
+++ b/blame.c
@@ -144,7 +144,7 @@
 
 	while (!strbuf_getwholeline_fd(&line, merge_head, '\n')) {
 		struct object_id oid;
-		if (line.len < GIT_SHA1_HEXSZ || get_oid_hex(line.buf, &oid))
+		if (get_oid_hex(line.buf, &oid))
 			die("unknown line in '%s': %s",
 			    git_path_merge_head(r), line.buf);
 		tail = append_parent(r, tail, &oid);
@@ -417,14 +417,15 @@
 		/* Ignore whitespace pairs */
 		if (hash == 0)
 			continue;
-		hashmap_entry_init(entry, hash);
+		hashmap_entry_init(&entry->entry, hash);
 
-		found_entry = hashmap_get(&result->map, entry, NULL);
+		found_entry = hashmap_get_entry(&result->map, entry,
+						/* member name */ entry, NULL);
 		if (found_entry) {
 			found_entry->count += 1;
 		} else {
 			entry->count = 1;
-			hashmap_add(&result->map, entry);
+			hashmap_add(&result->map, &entry->entry);
 			++entry;
 		}
 	}
@@ -432,7 +433,7 @@
 
 static void free_fingerprint(struct fingerprint *f)
 {
-	hashmap_free(&f->map, 0);
+	hashmap_free(&f->map);
 	free(f->entries);
 }
 
@@ -449,10 +450,10 @@
 	struct hashmap_iter iter;
 	const struct fingerprint_entry *entry_a, *entry_b;
 
-	hashmap_iter_init(&b->map, &iter);
-
-	while ((entry_b = hashmap_iter_next(&iter))) {
-		if ((entry_a = hashmap_get(&a->map, entry_b, NULL))) {
+	hashmap_for_each_entry(&b->map, &iter, entry_b,
+				entry /* member name */) {
+		entry_a = hashmap_get_entry(&a->map, entry_b, entry, NULL);
+		if (entry_a) {
 			intersection += entry_a->count < entry_b->count ?
 					entry_a->count : entry_b->count;
 		}
@@ -470,10 +471,12 @@
 
 	hashmap_iter_init(&b->map, &iter);
 
-	while ((entry_b = hashmap_iter_next(&iter))) {
-		if ((entry_a = hashmap_get(&a->map, entry_b, NULL))) {
+	hashmap_for_each_entry(&b->map, &iter, entry_b,
+				entry /* member name */) {
+		entry_a = hashmap_get_entry(&a->map, entry_b, entry, NULL);
+		if (entry_a) {
 			if (entry_a->count <= entry_b->count)
-				hashmap_remove(&a->map, entry_b, NULL);
+				hashmap_remove(&a->map, &entry_b->entry, NULL);
 			else
 				entry_a->count -= entry_b->count;
 		}
diff --git a/builtin/am.c b/builtin/am.c
index 39a70fc..8181c2a 100644
--- a/builtin/am.c
+++ b/builtin/am.c
@@ -24,7 +24,6 @@
 #include "sequencer.h"
 #include "revision.h"
 #include "merge-recursive.h"
-#include "revision.h"
 #include "log-tree.h"
 #include "notes-utils.h"
 #include "rerere.h"
@@ -1527,7 +1526,7 @@
 	o.branch1 = "HEAD";
 	their_tree_name = xstrfmt("%.*s", linelen(state->msg), state->msg);
 	o.branch2 = their_tree_name;
-	o.detect_directory_renames = 0;
+	o.detect_directory_renames = MERGE_DIRECTORY_RENAMES_NONE;
 
 	if (state->quiet)
 		o.verbosity = 0;
diff --git a/builtin/blame.c b/builtin/blame.c
index b6534d4..e946ba6 100644
--- a/builtin/blame.c
+++ b/builtin/blame.c
@@ -26,7 +26,6 @@
 #include "progress.h"
 #include "object-store.h"
 #include "blame.h"
-#include "string-list.h"
 #include "refs.h"
 
 static char blame_usage[] = N_("git blame [<options>] [<rev-opts>] [<rev>] [--] <file>");
@@ -460,7 +459,7 @@
 
 	for (cnt = 0; cnt < ent->num_lines; cnt++) {
 		char ch;
-		int length = (opt & OUTPUT_LONG_OBJECT_NAME) ? GIT_SHA1_HEXSZ : abbrev;
+		int length = (opt & OUTPUT_LONG_OBJECT_NAME) ? the_hash_algo->hexsz : abbrev;
 
 		if (opt & OUTPUT_COLOR_LINE) {
 			if (cnt > 0) {
@@ -885,6 +884,7 @@
 	struct range_set ranges;
 	unsigned int range_i;
 	long anchor;
+	const int hexsz = the_hash_algo->hexsz;
 
 	setup_default_color_by_age();
 	git_config(git_blame_config, &output_option);
@@ -931,11 +931,11 @@
 	} else if (show_progress < 0)
 		show_progress = isatty(2);
 
-	if (0 < abbrev && abbrev < GIT_SHA1_HEXSZ)
+	if (0 < abbrev && abbrev < hexsz)
 		/* one more abbrev length is needed for the boundary commit */
 		abbrev++;
 	else if (!abbrev)
-		abbrev = GIT_SHA1_HEXSZ;
+		abbrev = hexsz;
 
 	if (revs_file && read_ancestry(revs_file))
 		die_errno("reading graft file '%s' failed", revs_file);
diff --git a/builtin/checkout.c b/builtin/checkout.c
index 1283727..3634a3d 100644
--- a/builtin/checkout.c
+++ b/builtin/checkout.c
@@ -709,11 +709,11 @@
 			 * give up or do a real merge, depending on
 			 * whether the merge flag was used.
 			 */
-			struct tree *result;
 			struct tree *work;
 			struct tree *old_tree;
 			struct merge_options o;
 			struct strbuf sb = STRBUF_INIT;
+			struct strbuf old_commit_shortname = STRBUF_INIT;
 
 			if (!opts->merge)
 				return 1;
@@ -754,7 +754,7 @@
 			 */
 			init_merge_options(&o, the_repository);
 			o.verbosity = 0;
-			work = write_tree_from_memory(&o);
+			work = write_in_core_index_as_tree(the_repository);
 
 			ret = reset_tree(new_tree,
 					 opts, 1,
@@ -762,19 +762,25 @@
 			if (ret)
 				return ret;
 			o.ancestor = old_branch_info->name;
+			if (old_branch_info->name == NULL) {
+				strbuf_add_unique_abbrev(&old_commit_shortname,
+							 &old_branch_info->commit->object.oid,
+							 DEFAULT_ABBREV);
+				o.ancestor = old_commit_shortname.buf;
+			}
 			o.branch1 = new_branch_info->name;
 			o.branch2 = "local";
 			ret = merge_trees(&o,
 					  new_tree,
 					  work,
-					  old_tree,
-					  &result);
+					  old_tree);
 			if (ret < 0)
 				exit(128);
 			ret = reset_tree(new_tree,
 					 opts, 0,
 					 writeout_error);
 			strbuf_release(&o.obuf);
+			strbuf_release(&old_commit_shortname);
 			if (ret)
 				return ret;
 		}
diff --git a/builtin/clean.c b/builtin/clean.c
index 851beb7..5abf087 100644
--- a/builtin/clean.c
+++ b/builtin/clean.c
@@ -158,7 +158,8 @@
 
 	*dir_gone = 1;
 
-	if ((force_flag & REMOVE_DIR_KEEP_NESTED_GIT) && is_nonbare_repository_dir(path)) {
+	if ((force_flag & REMOVE_DIR_KEEP_NESTED_GIT) &&
+	    is_nonbare_repository_dir(path)) {
 		if (!quiet) {
 			quote_path_relative(path->buf, prefix, &quoted);
 			printf(dry_run ?  _(msg_would_skip_git_dir) : _(msg_skip_git_dir),
@@ -946,9 +947,19 @@
 
 	if (force > 1)
 		rm_flags = 0;
+	else
+		dir.flags |= DIR_SKIP_NESTED_GIT;
 
 	dir.flags |= DIR_SHOW_OTHER_DIRECTORIES;
 
+	if (argc) {
+		/*
+		 * Remaining args implies pathspecs specified, and we should
+		 * recurse within those.
+		 */
+		remove_directories = 1;
+	}
+
 	if (remove_directories)
 		dir.flags |= DIR_SHOW_IGNORED_TOO | DIR_KEEP_UNTRACKED_CONTENTS;
 
@@ -1007,6 +1018,7 @@
 	for_each_string_list_item(item, &del_list) {
 		struct stat st;
 
+		strbuf_reset(&abs_path);
 		if (prefix)
 			strbuf_addstr(&abs_path, prefix);
 
@@ -1040,7 +1052,6 @@
 				printf(dry_run ? _(msg_would_remove) : _(msg_remove), qname);
 			}
 		}
-		strbuf_reset(&abs_path);
 	}
 
 	strbuf_release(&abs_path);
diff --git a/builtin/clone.c b/builtin/clone.c
index 2048b67..c46ee29 100644
--- a/builtin/clone.c
+++ b/builtin/clone.c
@@ -32,7 +32,6 @@
 #include "connected.h"
 #include "packfile.h"
 #include "list-objects-filter-options.h"
-#include "object-store.h"
 
 /*
  * Overall FIXMEs:
@@ -785,7 +784,7 @@
 	if (write_locked_index(&the_index, &lock_file, COMMIT_LOCK))
 		die(_("unable to write new index file"));
 
-	err |= run_hook_le(NULL, "post-checkout", sha1_to_hex(null_sha1),
+	err |= run_hook_le(NULL, "post-checkout", oid_to_hex(&null_oid),
 			   oid_to_hex(&oid), "1", NULL);
 
 	if (!err && (option_recurse_submodules.nr > 0)) {
diff --git a/builtin/commit-graph.c b/builtin/commit-graph.c
index a6c500d..addc8d4 100644
--- a/builtin/commit-graph.c
+++ b/builtin/commit-graph.c
@@ -60,6 +60,8 @@
 		OPT_END(),
 	};
 
+	trace2_cmd_mode("verify");
+
 	opts.progress = isatty(2);
 	argc = parse_options(argc, argv, NULL,
 			     builtin_commit_graph_verify_options,
@@ -107,6 +109,8 @@
 		OPT_END(),
 	};
 
+	trace2_cmd_mode("read");
+
 	argc = parse_options(argc, argv, NULL,
 			     builtin_commit_graph_read_options,
 			     builtin_commit_graph_read_usage, 0);
@@ -190,6 +194,8 @@
 	split_opts.max_commits = 0;
 	split_opts.expire_time = 0;
 
+	trace2_cmd_mode("write");
+
 	argc = parse_options(argc, argv, NULL,
 			     builtin_commit_graph_write_options,
 			     builtin_commit_graph_write_usage, 0);
diff --git a/builtin/commit.c b/builtin/commit.c
index ae7aaf6..e588bc6 100644
--- a/builtin/commit.c
+++ b/builtin/commit.c
@@ -510,7 +510,7 @@
 	s->nowarn = nowarn;
 	s->is_initial = get_oid(s->reference, &oid) ? 1 : 0;
 	if (!s->is_initial)
-		hashcpy(s->sha1_commit, oid.hash);
+		oidcpy(&s->oid_commit, &oid);
 	s->status_format = status_format;
 	s->ignore_submodule_arg = ignore_submodule_arg;
 
@@ -1406,7 +1406,7 @@
 
 	s.is_initial = get_oid(s.reference, &oid) ? 1 : 0;
 	if (!s.is_initial)
-		hashcpy(s.sha1_commit, oid.hash);
+		oidcpy(&s.oid_commit, &oid);
 
 	s.ignore_submodule_arg = ignore_submodule_arg;
 	s.status_format = status_format;
diff --git a/builtin/describe.c b/builtin/describe.c
index e048f85..b6df81d 100644
--- a/builtin/describe.c
+++ b/builtin/describe.c
@@ -15,7 +15,6 @@
 #include "argv-array.h"
 #include "run-command.h"
 #include "object-store.h"
-#include "revision.h"
 #include "list-objects.h"
 #include "commit-slab.h"
 
@@ -64,19 +63,22 @@
 };
 
 static int commit_name_neq(const void *unused_cmp_data,
-			   const void *entry,
-			   const void *entry_or_key,
+			   const struct hashmap_entry *eptr,
+			   const struct hashmap_entry *entry_or_key,
 			   const void *peeled)
 {
-	const struct commit_name *cn1 = entry;
-	const struct commit_name *cn2 = entry_or_key;
+	const struct commit_name *cn1, *cn2;
+
+	cn1 = container_of(eptr, const struct commit_name, entry);
+	cn2 = container_of(entry_or_key, const struct commit_name, entry);
 
 	return !oideq(&cn1->peeled, peeled ? peeled : &cn2->peeled);
 }
 
 static inline struct commit_name *find_commit_name(const struct object_id *peeled)
 {
-	return hashmap_get_from_hash(&names, oidhash(peeled), peeled);
+	return hashmap_get_entry_from_hash(&names, oidhash(peeled), peeled,
+						struct commit_name, entry);
 }
 
 static int replace_name(struct commit_name *e,
@@ -123,8 +125,8 @@
 		if (!e) {
 			e = xmalloc(sizeof(struct commit_name));
 			oidcpy(&e->peeled, peeled);
-			hashmap_entry_init(e, oidhash(peeled));
-			hashmap_add(&names, e);
+			hashmap_entry_init(&e->entry, oidhash(peeled));
+			hashmap_add(&names, &e->entry);
 			e->path = NULL;
 		}
 		e->tag = tag;
@@ -330,8 +332,8 @@
 		struct commit_name *n;
 
 		init_commit_names(&commit_names);
-		n = hashmap_iter_first(&names, &iter);
-		for (; n; n = hashmap_iter_next(&iter)) {
+		hashmap_for_each_entry(&names, &iter, n,
+					entry /* member name */) {
 			c = lookup_commit_reference_gently(the_repository,
 							   &n->peeled, 1);
 			if (c)
diff --git a/builtin/difftool.c b/builtin/difftool.c
index 16eb8b7..c280e68 100644
--- a/builtin/difftool.c
+++ b/builtin/difftool.c
@@ -125,12 +125,15 @@
 };
 
 static int working_tree_entry_cmp(const void *unused_cmp_data,
-				  const void *entry,
-				  const void *entry_or_key,
+				  const struct hashmap_entry *eptr,
+				  const struct hashmap_entry *entry_or_key,
 				  const void *unused_keydata)
 {
-	const struct working_tree_entry *a = entry;
-	const struct working_tree_entry *b = entry_or_key;
+	const struct working_tree_entry *a, *b;
+
+	a = container_of(eptr, const struct working_tree_entry, entry);
+	b = container_of(entry_or_key, const struct working_tree_entry, entry);
+
 	return strcmp(a->path, b->path);
 }
 
@@ -145,12 +148,14 @@
 };
 
 static int pair_cmp(const void *unused_cmp_data,
-		    const void *entry,
-		    const void *entry_or_key,
+		    const struct hashmap_entry *eptr,
+		    const struct hashmap_entry *entry_or_key,
 		    const void *unused_keydata)
 {
-	const struct pair_entry *a = entry;
-	const struct pair_entry *b = entry_or_key;
+	const struct pair_entry *a, *b;
+
+	a = container_of(eptr, const struct pair_entry, entry);
+	b = container_of(entry_or_key, const struct pair_entry, entry);
 
 	return strcmp(a->path, b->path);
 }
@@ -161,14 +166,14 @@
 	struct pair_entry *e, *existing;
 
 	FLEX_ALLOC_STR(e, path, path);
-	hashmap_entry_init(e, strhash(path));
-	existing = hashmap_get(map, e, NULL);
+	hashmap_entry_init(&e->entry, strhash(path));
+	existing = hashmap_get_entry(map, e, entry, NULL);
 	if (existing) {
 		free(e);
 		e = existing;
 	} else {
 		e->left[0] = e->right[0] = '\0';
-		hashmap_add(map, e);
+		hashmap_add(map, &e->entry);
 	}
 	strlcpy(is_right ? e->right : e->left, content, PATH_MAX);
 }
@@ -179,12 +184,14 @@
 };
 
 static int path_entry_cmp(const void *unused_cmp_data,
-			  const void *entry,
-			  const void *entry_or_key,
+			  const struct hashmap_entry *eptr,
+			  const struct hashmap_entry *entry_or_key,
 			  const void *key)
 {
-	const struct path_entry *a = entry;
-	const struct path_entry *b = entry_or_key;
+	const struct path_entry *a, *b;
+
+	a = container_of(eptr, const struct path_entry, entry);
+	b = container_of(entry_or_key, const struct path_entry, entry);
 
 	return strcmp(a->path, key ? key : b->path);
 }
@@ -234,8 +241,8 @@
 	while (!strbuf_getline_nul(&buf, fp)) {
 		struct path_entry *entry;
 		FLEX_ALLOC_STR(entry, path, buf.buf);
-		hashmap_entry_init(entry, strhash(buf.buf));
-		hashmap_add(result, entry);
+		hashmap_entry_init(&entry->entry, strhash(buf.buf));
+		hashmap_add(result, &entry->entry);
 	}
 	fclose(fp);
 	if (finish_command(&diff_files))
@@ -461,12 +468,13 @@
 
 			/* Avoid duplicate working_tree entries */
 			FLEX_ALLOC_STR(entry, path, dst_path);
-			hashmap_entry_init(entry, strhash(dst_path));
-			if (hashmap_get(&working_tree_dups, entry, NULL)) {
+			hashmap_entry_init(&entry->entry, strhash(dst_path));
+			if (hashmap_get(&working_tree_dups, &entry->entry,
+					NULL)) {
 				free(entry);
 				continue;
 			}
-			hashmap_add(&working_tree_dups, entry);
+			hashmap_add(&working_tree_dups, &entry->entry);
 
 			if (!use_wt_file(workdir, dst_path, &roid)) {
 				if (checkout_path(rmode, &roid, dst_path,
@@ -530,8 +538,8 @@
 	 * temporary file to both the left and right directories to show the
 	 * change in the recorded SHA1 for the submodule.
 	 */
-	hashmap_iter_init(&submodules, &iter);
-	while ((entry = hashmap_iter_next(&iter))) {
+	hashmap_for_each_entry(&submodules, &iter, entry,
+				entry /* member name */) {
 		if (*entry->left) {
 			add_path(&ldir, ldir_len, entry->path);
 			ensure_leading_directories(ldir.buf);
@@ -549,8 +557,8 @@
 	 * shows only the link itself, not the contents of the link target.
 	 * This loop replicates that behavior.
 	 */
-	hashmap_iter_init(&symlinks2, &iter);
-	while ((entry = hashmap_iter_next(&iter))) {
+	hashmap_for_each_entry(&symlinks2, &iter, entry,
+				entry /* member name */) {
 		if (*entry->left) {
 			add_path(&ldir, ldir_len, entry->path);
 			ensure_leading_directories(ldir.buf);
diff --git a/builtin/fast-export.c b/builtin/fast-export.c
index f541f55..dbec4df 100644
--- a/builtin/fast-export.c
+++ b/builtin/fast-export.c
@@ -40,6 +40,7 @@
 static int full_tree;
 static int reference_excluded_commits;
 static int show_original_ids;
+static int mark_tags;
 static struct string_list extra_refs = STRING_LIST_INIT_NODUP;
 static struct string_list tag_refs = STRING_LIST_INIT_NODUP;
 static struct refspec refspecs = REFSPEC_INIT_FETCH;
@@ -126,10 +127,15 @@
 };
 
 static int anonymized_entry_cmp(const void *unused_cmp_data,
-				const void *va, const void *vb,
+				const struct hashmap_entry *eptr,
+				const struct hashmap_entry *entry_or_key,
 				const void *unused_keydata)
 {
-	const struct anonymized_entry *a = va, *b = vb;
+	const struct anonymized_entry *a, *b;
+
+	a = container_of(eptr, const struct anonymized_entry, hash);
+	b = container_of(entry_or_key, const struct anonymized_entry, hash);
+
 	return a->orig_len != b->orig_len ||
 		memcmp(a->orig, b->orig, a->orig_len);
 }
@@ -148,10 +154,10 @@
 	if (!map->cmpfn)
 		hashmap_init(map, anonymized_entry_cmp, NULL, 0);
 
-	hashmap_entry_init(&key, memhash(orig, *len));
+	hashmap_entry_init(&key.hash, memhash(orig, *len));
 	key.orig = orig;
 	key.orig_len = *len;
-	ret = hashmap_get(map, &key, NULL);
+	ret = hashmap_get_entry(map, &key, hash, NULL);
 
 	if (!ret) {
 		ret = xmalloc(sizeof(*ret));
@@ -160,7 +166,7 @@
 		ret->orig_len = *len;
 		ret->anon = generate(orig, len);
 		ret->anon_len = *len;
-		hashmap_put(map, ret);
+		hashmap_put(map, &ret->hash);
 	}
 
 	*len = ret->anon_len;
@@ -842,25 +848,40 @@
 			free(buf);
 			return;
 		case REWRITE:
-			if (tagged->type != OBJ_COMMIT) {
-				die("tag %s tags unexported %s!",
-				    oid_to_hex(&tag->object.oid),
-				    type_name(tagged->type));
+			if (tagged->type == OBJ_TAG && !mark_tags) {
+				die(_("Error: Cannot export nested tags unless --mark-tags is specified."));
+			} else if (tagged->type == OBJ_COMMIT) {
+				p = rewrite_commit((struct commit *)tagged);
+				if (!p) {
+					printf("reset %s\nfrom %s\n\n",
+					       name, oid_to_hex(&null_oid));
+					free(buf);
+					return;
+				}
+				tagged_mark = get_object_mark(&p->object);
+			} else {
+				/* tagged->type is either OBJ_BLOB or OBJ_TAG */
+				tagged_mark = get_object_mark(tagged);
 			}
-			p = rewrite_commit((struct commit *)tagged);
-			if (!p) {
-				printf("reset %s\nfrom %s\n\n",
-				       name, oid_to_hex(&null_oid));
-				free(buf);
-				return;
-			}
-			tagged_mark = get_object_mark(&p->object);
 		}
 	}
 
+	if (tagged->type == OBJ_TAG) {
+		printf("reset %s\nfrom %s\n\n",
+		       name, oid_to_hex(&null_oid));
+	}
 	if (starts_with(name, "refs/tags/"))
 		name += 10;
-	printf("tag %s\nfrom :%d\n", name, tagged_mark);
+	printf("tag %s\n", name);
+	if (mark_tags) {
+		mark_next_object(&tag->object);
+		printf("mark :%"PRIu32"\n", last_idnum);
+	}
+	if (tagged_mark)
+		printf("from :%d\n", tagged_mark);
+	else
+		printf("from %s\n", oid_to_hex(&tagged->oid));
+
 	if (show_original_ids)
 		printf("original-oid %s\n", oid_to_hex(&tag->object.oid));
 	printf("%.*s%sdata %d\n%.*s\n",
@@ -1047,11 +1068,16 @@
 		error("Unable to write marks file %s.", file);
 }
 
-static void import_marks(char *input_file)
+static void import_marks(char *input_file, int check_exists)
 {
 	char line[512];
-	FILE *f = xfopen(input_file, "r");
+	FILE *f;
+	struct stat sb;
 
+	if (check_exists && stat(input_file, &sb))
+		return;
+
+	f = xfopen(input_file, "r");
 	while (fgets(line, sizeof(line), f)) {
 		uint32_t mark;
 		char *line_end, *mark_end;
@@ -1115,7 +1141,9 @@
 	struct rev_info revs;
 	struct object_array commits = OBJECT_ARRAY_INIT;
 	struct commit *commit;
-	char *export_filename = NULL, *import_filename = NULL;
+	char *export_filename = NULL,
+	     *import_filename = NULL,
+	     *import_filename_if_exists = NULL;
 	uint32_t lastimportid;
 	struct string_list refspecs_list = STRING_LIST_INIT_NODUP;
 	struct string_list paths_of_changed_objects = STRING_LIST_INIT_DUP;
@@ -1135,6 +1163,10 @@
 			     N_("Dump marks to this file")),
 		OPT_STRING(0, "import-marks", &import_filename, N_("file"),
 			     N_("Import marks from this file")),
+		OPT_STRING(0, "import-marks-if-exists",
+			     &import_filename_if_exists,
+			     N_("file"),
+			     N_("Import marks from this file if it exists")),
 		OPT_BOOL(0, "fake-missing-tagger", &fake_missing_tagger,
 			 N_("Fake a tagger when tags lack one")),
 		OPT_BOOL(0, "full-tree", &full_tree,
@@ -1149,6 +1181,8 @@
 			 &reference_excluded_commits, N_("Reference parents which are not in fast-export stream by object id")),
 		OPT_BOOL(0, "show-original-ids", &show_original_ids,
 			    N_("Show original object ids of blobs/commits")),
+		OPT_BOOL(0, "mark-tags", &mark_tags,
+			    N_("Label tags with mark ids")),
 
 		OPT_END()
 	};
@@ -1182,8 +1216,12 @@
 	if (use_done_feature)
 		printf("feature done\n");
 
+	if (import_filename && import_filename_if_exists)
+		die(_("Cannot pass both --import-marks and --import-marks-if-exists"));
 	if (import_filename)
-		import_marks(import_filename);
+		import_marks(import_filename, 0);
+	else if (import_filename_if_exists)
+		import_marks(import_filename_if_exists, 1);
 	lastimportid = last_idnum;
 
 	if (import_filename && revs.prune_data.nr)
diff --git a/builtin/fetch.c b/builtin/fetch.c
index 24d382b..ef73163 100644
--- a/builtin/fetch.c
+++ b/builtin/fetch.c
@@ -59,7 +59,8 @@
 static int progress = -1;
 static int enable_auto_gc = 1;
 static int tags = TAGS_DEFAULT, unshallow, update_shallow, deepen;
-static int max_children = 1;
+static int max_jobs = -1, submodule_fetch_jobs_config = -1;
+static int fetch_parallel_config = 1;
 static enum transport_family family;
 static const char *depth;
 static const char *deepen_since;
@@ -101,13 +102,20 @@
 	}
 
 	if (!strcmp(k, "submodule.fetchjobs")) {
-		max_children = parse_submodule_fetchjobs(k, v);
+		submodule_fetch_jobs_config = parse_submodule_fetchjobs(k, v);
 		return 0;
 	} else if (!strcmp(k, "fetch.recursesubmodules")) {
 		recurse_submodules = parse_fetch_recurse_submodules_arg(k, v);
 		return 0;
 	}
 
+	if (!strcmp(k, "fetch.parallel")) {
+		fetch_parallel_config = git_config_int(k, v);
+		if (fetch_parallel_config < 0)
+			die(_("fetch.parallel cannot be negative"));
+		return 0;
+	}
+
 	return git_default_config(k, v, cb);
 }
 
@@ -141,7 +149,7 @@
 		    N_("fetch all tags and associated objects"), TAGS_SET),
 	OPT_SET_INT('n', NULL, &tags,
 		    N_("do not fetch all tags (--no-tags)"), TAGS_UNSET),
-	OPT_INTEGER('j', "jobs", &max_children,
+	OPT_INTEGER('j', "jobs", &max_jobs,
 		    N_("number of submodules fetched in parallel")),
 	OPT_BOOL('p', "prune", &prune,
 		 N_("prune remote-tracking branches no longer on remote")),
@@ -256,20 +264,21 @@
 }
 
 struct refname_hash_entry {
-	struct hashmap_entry ent; /* must be the first member */
+	struct hashmap_entry ent;
 	struct object_id oid;
 	int ignore;
 	char refname[FLEX_ARRAY];
 };
 
 static int refname_hash_entry_cmp(const void *hashmap_cmp_fn_data,
-				  const void *e1_,
-				  const void *e2_,
+				  const struct hashmap_entry *eptr,
+				  const struct hashmap_entry *entry_or_key,
 				  const void *keydata)
 {
-	const struct refname_hash_entry *e1 = e1_;
-	const struct refname_hash_entry *e2 = e2_;
+	const struct refname_hash_entry *e1, *e2;
 
+	e1 = container_of(eptr, const struct refname_hash_entry, ent);
+	e2 = container_of(entry_or_key, const struct refname_hash_entry, ent);
 	return strcmp(e1->refname, keydata ? keydata : e2->refname);
 }
 
@@ -281,9 +290,9 @@
 	size_t len = strlen(refname);
 
 	FLEX_ALLOC_MEM(ent, refname, refname, len);
-	hashmap_entry_init(ent, strhash(refname));
+	hashmap_entry_init(&ent->ent, strhash(refname));
 	oidcpy(&ent->oid, oid);
-	hashmap_add(map, ent);
+	hashmap_add(map, &ent->ent);
 	return ent;
 }
 
@@ -372,7 +381,7 @@
 		item = refname_hash_add(&remote_refs, ref->name, &ref->old_oid);
 		string_list_insert(&remote_refs_list, ref->name);
 	}
-	hashmap_free(&existing_refs, 1);
+	hashmap_free_entries(&existing_refs, struct refname_hash_entry, ent);
 
 	/*
 	 * We may have a final lightweight tag that needs to be
@@ -390,8 +399,10 @@
 	for_each_string_list_item(remote_ref_item, &remote_refs_list) {
 		const char *refname = remote_ref_item->string;
 		struct ref *rm;
+		unsigned int hash = strhash(refname);
 
-		item = hashmap_get_from_hash(&remote_refs, strhash(refname), refname);
+		item = hashmap_get_entry_from_hash(&remote_refs, hash, refname,
+					struct refname_hash_entry, ent);
 		if (!item)
 			BUG("unseen remote ref?");
 
@@ -405,7 +416,7 @@
 		**tail = rm;
 		*tail = &rm->next;
 	}
-	hashmap_free(&remote_refs, 1);
+	hashmap_free_entries(&remote_refs, struct refname_hash_entry, ent);
 	string_list_clear(&remote_refs_list, 0);
 	oidset_clear(&fetch_oids);
 }
@@ -524,17 +535,18 @@
 		if (rm->peer_ref) {
 			const char *refname = rm->peer_ref->name;
 			struct refname_hash_entry *peer_item;
+			unsigned int hash = strhash(refname);
 
-			peer_item = hashmap_get_from_hash(&existing_refs,
-							  strhash(refname),
-							  refname);
+			peer_item = hashmap_get_entry_from_hash(&existing_refs,
+						hash, refname,
+						struct refname_hash_entry, ent);
 			if (peer_item) {
 				struct object_id *old_oid = &peer_item->oid;
 				oidcpy(&rm->peer_ref->old_oid, old_oid);
 			}
 		}
 	}
-	hashmap_free(&existing_refs, 1);
+	hashmap_free_entries(&existing_refs, struct refname_hash_entry, ent);
 
 	return ref_map;
 }
@@ -1513,7 +1525,62 @@
 
 }
 
-static int fetch_multiple(struct string_list *list)
+/* Fetch multiple remotes in parallel */
+
+struct parallel_fetch_state {
+	const char **argv;
+	struct string_list *remotes;
+	int next, result;
+};
+
+static int fetch_next_remote(struct child_process *cp, struct strbuf *out,
+			     void *cb, void **task_cb)
+{
+	struct parallel_fetch_state *state = cb;
+	char *remote;
+
+	if (state->next < 0 || state->next >= state->remotes->nr)
+		return 0;
+
+	remote = state->remotes->items[state->next++].string;
+	*task_cb = remote;
+
+	argv_array_pushv(&cp->args, state->argv);
+	argv_array_push(&cp->args, remote);
+	cp->git_cmd = 1;
+
+	if (verbosity >= 0)
+		printf(_("Fetching %s\n"), remote);
+
+	return 1;
+}
+
+static int fetch_failed_to_start(struct strbuf *out, void *cb, void *task_cb)
+{
+	struct parallel_fetch_state *state = cb;
+	const char *remote = task_cb;
+
+	state->result = error(_("Could not fetch %s"), remote);
+
+	return 0;
+}
+
+static int fetch_finished(int result, struct strbuf *out,
+			  void *cb, void *task_cb)
+{
+	struct parallel_fetch_state *state = cb;
+	const char *remote = task_cb;
+
+	if (result) {
+		strbuf_addf(out, _("could not fetch '%s' (exit code: %d)\n"),
+			    remote, result);
+		state->result = -1;
+	}
+
+	return 0;
+}
+
+static int fetch_multiple(struct string_list *list, int max_children)
 {
 	int i, result = 0;
 	struct argv_array argv = ARGV_ARRAY_INIT;
@@ -1527,20 +1594,34 @@
 	argv_array_pushl(&argv, "fetch", "--append", "--no-auto-gc", NULL);
 	add_options_to_argv(&argv);
 
-	for (i = 0; i < list->nr; i++) {
-		const char *name = list->items[i].string;
-		argv_array_push(&argv, name);
-		if (verbosity >= 0)
-			printf(_("Fetching %s\n"), name);
-		if (run_command_v_opt(argv.argv, RUN_GIT_CMD)) {
-			error(_("Could not fetch %s"), name);
-			result = 1;
+	if (max_children != 1 && list->nr != 1) {
+		struct parallel_fetch_state state = { argv.argv, list, 0, 0 };
+
+		argv_array_push(&argv, "--end-of-options");
+		result = run_processes_parallel_tr2(max_children,
+						    &fetch_next_remote,
+						    &fetch_failed_to_start,
+						    &fetch_finished,
+						    &state,
+						    "fetch", "parallel/fetch");
+
+		if (!result)
+			result = state.result;
+	} else
+		for (i = 0; i < list->nr; i++) {
+			const char *name = list->items[i].string;
+			argv_array_push(&argv, name);
+			if (verbosity >= 0)
+				printf(_("Fetching %s\n"), name);
+			if (run_command_v_opt(argv.argv, RUN_GIT_CMD)) {
+				error(_("Could not fetch %s"), name);
+				result = 1;
+			}
+			argv_array_pop(&argv);
 		}
-		argv_array_pop(&argv);
-	}
 
 	argv_array_clear(&argv);
-	return result;
+	return !!result;
 }
 
 /*
@@ -1673,7 +1754,8 @@
 	for (i = 1; i < argc; i++)
 		strbuf_addf(&default_rla, " %s", argv[i]);
 
-	fetch_config_from_gitmodules(&max_children, &recurse_submodules);
+	fetch_config_from_gitmodules(&submodule_fetch_jobs_config,
+				     &recurse_submodules);
 	git_config(git_fetch_config, NULL);
 
 	argc = parse_options(argc, argv, prefix,
@@ -1739,15 +1821,27 @@
 			fetch_one_setup_partial(remote);
 		result = fetch_one(remote, argc, argv, prune_tags_ok);
 	} else {
+		int max_children = max_jobs;
+
 		if (filter_options.choice)
 			die(_("--filter can only be used with the remote "
 			      "configured in extensions.partialclone"));
+
+		if (max_children < 0)
+			max_children = fetch_parallel_config;
+
 		/* TODO should this also die if we have a previous partial-clone? */
-		result = fetch_multiple(&list);
+		result = fetch_multiple(&list, max_children);
 	}
 
 	if (!result && (recurse_submodules != RECURSE_SUBMODULES_OFF)) {
 		struct argv_array options = ARGV_ARRAY_INIT;
+		int max_children = max_jobs;
+
+		if (max_children < 0)
+			max_children = submodule_fetch_jobs_config;
+		if (max_children < 0)
+			max_children = fetch_parallel_config;
 
 		add_options_to_argv(&options);
 		result = fetch_populated_submodules(the_repository,
diff --git a/builtin/index-pack.c b/builtin/index-pack.c
index a23454d..60a5591 100644
--- a/builtin/index-pack.c
+++ b/builtin/index-pack.c
@@ -1490,11 +1490,11 @@
 	}
 
 	if (!from_stdin) {
-		printf("%s\n", sha1_to_hex(hash));
+		printf("%s\n", hash_to_hex(hash));
 	} else {
 		struct strbuf buf = STRBUF_INIT;
 
-		strbuf_addf(&buf, "%s\t%s\n", report, sha1_to_hex(hash));
+		strbuf_addf(&buf, "%s\t%s\n", report, hash_to_hex(hash));
 		write_or_die(1, buf.buf, buf.len);
 		strbuf_release(&buf);
 
diff --git a/builtin/merge-recursive.c b/builtin/merge-recursive.c
index 5b910e3..a4bfd8f 100644
--- a/builtin/merge-recursive.c
+++ b/builtin/merge-recursive.c
@@ -1,3 +1,4 @@
+#include "cache.h"
 #include "builtin.h"
 #include "commit.h"
 #include "tag.h"
@@ -63,6 +64,9 @@
 	if (argc - i != 3) /* "--" "<head>" "<remote>" */
 		die(_("not handling anything other than two heads merge."));
 
+	if (repo_read_index_unmerged(the_repository))
+		die_resolve_conflict("merge");
+
 	o.branch1 = argv[++i];
 	o.branch2 = argv[++i];
 
diff --git a/builtin/name-rev.c b/builtin/name-rev.c
index c785fe1..b0f0776 100644
--- a/builtin/name-rev.c
+++ b/builtin/name-rev.c
@@ -9,7 +9,11 @@
 #include "sha1-lookup.h"
 #include "commit-slab.h"
 
-#define CUTOFF_DATE_SLOP 86400 /* one day */
+/*
+ * One day.  See the 'name a rev shortly after epoch' test in t6120 when
+ * changing this value
+ */
+#define CUTOFF_DATE_SLOP 86400
 
 typedef struct rev_name {
 	const char *tip_name;
@@ -481,8 +485,13 @@
 		add_object_array(object, *argv, &revs);
 	}
 
-	if (cutoff)
-		cutoff = cutoff - CUTOFF_DATE_SLOP;
+	if (cutoff) {
+		/* check for underflow */
+		if (cutoff > TIME_MIN + CUTOFF_DATE_SLOP)
+			cutoff = cutoff - CUTOFF_DATE_SLOP;
+		else
+			cutoff = TIME_MIN;
+	}
 	for_each_ref(name_ref, &data);
 
 	if (transform_stdin) {
diff --git a/builtin/patch-id.c b/builtin/patch-id.c
index bd28b80..3059e52 100644
--- a/builtin/patch-id.c
+++ b/builtin/patch-id.c
@@ -1,15 +1,16 @@
+#include "cache.h"
 #include "builtin.h"
 #include "config.h"
 #include "diff.h"
 
 static void flush_current_id(int patchlen, struct object_id *id, struct object_id *result)
 {
-	char name[50];
+	char name[GIT_MAX_HEXSZ + 1];
 
 	if (!patchlen)
 		return;
 
-	memcpy(name, oid_to_hex(id), GIT_SHA1_HEXSZ + 1);
+	memcpy(name, oid_to_hex(id), the_hash_algo->hexsz + 1);
 	printf("%s %s\n", oid_to_hex(result), name);
 }
 
@@ -60,9 +61,9 @@
 {
 	int patchlen = 0, found_next = 0;
 	int before = -1, after = -1;
-	git_SHA_CTX ctx;
+	git_hash_ctx ctx;
 
-	git_SHA1_Init(&ctx);
+	the_hash_algo->init_fn(&ctx);
 	oidclr(result);
 
 	while (strbuf_getwholeline(line_buf, stdin, '\n') != EOF) {
@@ -122,7 +123,7 @@
 		/* Compute the sha without whitespace */
 		len = remove_space(line);
 		patchlen += len;
-		git_SHA1_Update(&ctx, line, len);
+		the_hash_algo->update_fn(&ctx, line, len);
 	}
 
 	if (!found_next)
diff --git a/builtin/push.c b/builtin/push.c
index 3742daf..8729b03 100644
--- a/builtin/push.c
+++ b/builtin/push.c
@@ -143,8 +143,8 @@
 	return remote->url_nr;
 }
 
-static NORETURN int die_push_simple(struct branch *branch,
-				    struct remote *remote)
+static NORETURN void die_push_simple(struct branch *branch,
+				     struct remote *remote)
 {
 	/*
 	 * There's no point in using shorten_unambiguous_ref here,
diff --git a/builtin/receive-pack.c b/builtin/receive-pack.c
index dcf3855..411e0b4 100644
--- a/builtin/receive-pack.c
+++ b/builtin/receive-pack.c
@@ -417,24 +417,22 @@
 	return 0;
 }
 
-#define HMAC_BLOCK_SIZE 64
-
-static void hmac_sha1(unsigned char *out,
+static void hmac(unsigned char *out,
 		      const char *key_in, size_t key_len,
 		      const char *text, size_t text_len)
 {
-	unsigned char key[HMAC_BLOCK_SIZE];
-	unsigned char k_ipad[HMAC_BLOCK_SIZE];
-	unsigned char k_opad[HMAC_BLOCK_SIZE];
+	unsigned char key[GIT_MAX_BLKSZ];
+	unsigned char k_ipad[GIT_MAX_BLKSZ];
+	unsigned char k_opad[GIT_MAX_BLKSZ];
 	int i;
-	git_SHA_CTX ctx;
+	git_hash_ctx ctx;
 
 	/* RFC 2104 2. (1) */
-	memset(key, '\0', HMAC_BLOCK_SIZE);
-	if (HMAC_BLOCK_SIZE < key_len) {
-		git_SHA1_Init(&ctx);
-		git_SHA1_Update(&ctx, key_in, key_len);
-		git_SHA1_Final(key, &ctx);
+	memset(key, '\0', GIT_MAX_BLKSZ);
+	if (the_hash_algo->blksz < key_len) {
+		the_hash_algo->init_fn(&ctx);
+		the_hash_algo->update_fn(&ctx, key_in, key_len);
+		the_hash_algo->final_fn(key, &ctx);
 	} else {
 		memcpy(key, key_in, key_len);
 	}
@@ -446,29 +444,29 @@
 	}
 
 	/* RFC 2104 2. (3) & (4) */
-	git_SHA1_Init(&ctx);
-	git_SHA1_Update(&ctx, k_ipad, sizeof(k_ipad));
-	git_SHA1_Update(&ctx, text, text_len);
-	git_SHA1_Final(out, &ctx);
+	the_hash_algo->init_fn(&ctx);
+	the_hash_algo->update_fn(&ctx, k_ipad, sizeof(k_ipad));
+	the_hash_algo->update_fn(&ctx, text, text_len);
+	the_hash_algo->final_fn(out, &ctx);
 
 	/* RFC 2104 2. (6) & (7) */
-	git_SHA1_Init(&ctx);
-	git_SHA1_Update(&ctx, k_opad, sizeof(k_opad));
-	git_SHA1_Update(&ctx, out, GIT_SHA1_RAWSZ);
-	git_SHA1_Final(out, &ctx);
+	the_hash_algo->init_fn(&ctx);
+	the_hash_algo->update_fn(&ctx, k_opad, sizeof(k_opad));
+	the_hash_algo->update_fn(&ctx, out, the_hash_algo->rawsz);
+	the_hash_algo->final_fn(out, &ctx);
 }
 
 static char *prepare_push_cert_nonce(const char *path, timestamp_t stamp)
 {
 	struct strbuf buf = STRBUF_INIT;
-	unsigned char sha1[GIT_SHA1_RAWSZ];
+	unsigned char hash[GIT_MAX_RAWSZ];
 
 	strbuf_addf(&buf, "%s:%"PRItime, path, stamp);
-	hmac_sha1(sha1, buf.buf, buf.len, cert_nonce_seed, strlen(cert_nonce_seed));
+	hmac(hash, buf.buf, buf.len, cert_nonce_seed, strlen(cert_nonce_seed));
 	strbuf_release(&buf);
 
 	/* RFC 2104 5. HMAC-SHA1-80 */
-	strbuf_addf(&buf, "%"PRItime"-%.*s", stamp, GIT_SHA1_HEXSZ, sha1_to_hex(sha1));
+	strbuf_addf(&buf, "%"PRItime"-%.*s", stamp, (int)the_hash_algo->hexsz, hash_to_hex(hash));
 	return strbuf_detach(&buf, NULL);
 }
 
@@ -970,7 +968,7 @@
 	if (run_command(&child))
 		return "Working directory has staged changes";
 
-	read_tree[3] = sha1_to_hex(sha1);
+	read_tree[3] = hash_to_hex(sha1);
 	child_process_init(&child);
 	child.argv = read_tree;
 	child.env = env->argv;
@@ -987,13 +985,13 @@
 
 static const char *push_to_checkout_hook = "push-to-checkout";
 
-static const char *push_to_checkout(unsigned char *sha1,
+static const char *push_to_checkout(unsigned char *hash,
 				    struct argv_array *env,
 				    const char *work_tree)
 {
 	argv_array_pushf(env, "GIT_WORK_TREE=%s", absolute_path(work_tree));
 	if (run_hook_le(env->argv, push_to_checkout_hook,
-			sha1_to_hex(sha1), NULL))
+			hash_to_hex(hash), NULL))
 		return "push-to-checkout hook declined";
 	else
 		return NULL;
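
For reference, the hmac() helper above is the RFC 2104 construction generalized over
the_hash_algo: with K0 the key zero-padded to the block size (hashed down first if it
is longer), HMAC(K, text) = H((K0 XOR opad) || H((K0 XOR ipad) || text)), where ipad
is the byte 0x36 and opad the byte 0x5c repeated blksz times.  The nonce lengths fall
out of the algorithm parameters rather than from any new code:

	SHA-1:    blksz = 64, rawsz = 20, hexsz = 40  ->  "<stamp>-" + 40 hex digits
	SHA-256:  blksz = 64, rawsz = 32, hexsz = 64  ->  "<stamp>-" + 64 hex digits
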
diff --git a/builtin/repack.c b/builtin/repack.c
index 3b3dd14..094c2f8 100644
--- a/builtin/repack.c
+++ b/builtin/repack.c
@@ -191,7 +191,7 @@
 			die(_("could not start pack-objects to repack promisor objects"));
 	}
 
-	xwrite(cmd->in, oid_to_hex(oid), GIT_SHA1_HEXSZ);
+	xwrite(cmd->in, oid_to_hex(oid), the_hash_algo->hexsz);
 	xwrite(cmd->in, "\n", 1);
 	return 0;
 }
diff --git a/builtin/replace.c b/builtin/replace.c
index 2a4afb3..bd92dc6 100644
--- a/builtin/replace.c
+++ b/builtin/replace.c
@@ -272,7 +272,7 @@
 			return error(_("unable to spawn mktree"));
 		}
 
-		if (strbuf_read(&result, cmd.out, 41) < 0) {
+		if (strbuf_read(&result, cmd.out, the_hash_algo->hexsz + 1) < 0) {
 			error_errno(_("unable to read from mktree"));
 			close(fd);
 			close(cmd.out);
@@ -358,14 +358,15 @@
 	struct strbuf new_parents = STRBUF_INIT;
 	const char *parent_start, *parent_end;
 	int i;
+	const unsigned hexsz = the_hash_algo->hexsz;
 
 	/* find existing parents */
 	parent_start = buf->buf;
-	parent_start += GIT_SHA1_HEXSZ + 6; /* "tree " + "hex sha1" + "\n" */
+	parent_start += hexsz + 6; /* "tree " + "hex sha1" + "\n" */
 	parent_end = parent_start;
 
 	while (starts_with(parent_end, "parent "))
-		parent_end += 48; /* "parent " + "hex sha1" + "\n" */
+		parent_end += hexsz + 8; /* "parent " + "hex sha1" + "\n" */
 
 	/* prepare new parents */
 	for (i = 0; i < argc; i++) {
diff --git a/builtin/rev-list.c b/builtin/rev-list.c
index 74dbfad..e28d62e 100644
--- a/builtin/rev-list.c
+++ b/builtin/rev-list.c
@@ -18,7 +18,6 @@
 #include "reflog-walk.h"
 #include "oidset.h"
 #include "packfile.h"
-#include "object-store.h"
 
 static const char rev_list_usage[] =
 "git rev-list [OPTION] <commit-id>... [ -- paths... ]\n"
diff --git a/builtin/rev-parse.c b/builtin/rev-parse.c
index f8bbe6d..308c67e 100644
--- a/builtin/rev-parse.c
+++ b/builtin/rev-parse.c
@@ -593,6 +593,7 @@
 	const char *name = NULL;
 	struct object_context unused;
 	struct strbuf buf = STRBUF_INIT;
+	const int hexsz = the_hash_algo->hexsz;
 
 	if (argc > 1 && !strcmp("--parseopt", argv[1]))
 		return cmd_parseopt(argc - 1, argv + 1, prefix);
@@ -730,8 +731,8 @@
 				abbrev = strtoul(arg, NULL, 10);
 				if (abbrev < MINIMUM_ABBREV)
 					abbrev = MINIMUM_ABBREV;
-				else if (40 <= abbrev)
-					abbrev = 40;
+				else if (hexsz <= abbrev)
+					abbrev = hexsz;
 				continue;
 			}
 			if (!strcmp(arg, "--sq")) {
diff --git a/builtin/show-index.c b/builtin/show-index.c
index a6e6788..0826f6a 100644
--- a/builtin/show-index.c
+++ b/builtin/show-index.c
@@ -11,6 +11,7 @@
 	unsigned nr;
 	unsigned int version;
 	static unsigned int top_index[256];
+	const unsigned hashsz = the_hash_algo->rawsz;
 
 	if (argc != 1)
 		usage(show_index_usage);
@@ -36,23 +37,23 @@
 	}
 	if (version == 1) {
 		for (i = 0; i < nr; i++) {
-			unsigned int offset, entry[6];
+			unsigned int offset, entry[(GIT_MAX_RAWSZ + 4) / sizeof(unsigned int)];
 
-			if (fread(entry, 4 + 20, 1, stdin) != 1)
+			if (fread(entry, 4 + hashsz, 1, stdin) != 1)
 				die("unable to read entry %u/%u", i, nr);
 			offset = ntohl(entry[0]);
-			printf("%u %s\n", offset, sha1_to_hex((void *)(entry+1)));
+			printf("%u %s\n", offset, hash_to_hex((void *)(entry+1)));
 		}
 	} else {
 		unsigned off64_nr = 0;
 		struct {
-			unsigned char sha1[20];
+			struct object_id oid;
 			uint32_t crc;
 			uint32_t off;
 		} *entries;
 		ALLOC_ARRAY(entries, nr);
 		for (i = 0; i < nr; i++)
-			if (fread(entries[i].sha1, 20, 1, stdin) != 1)
+			if (fread(entries[i].oid.hash, hashsz, 1, stdin) != 1)
 				die("unable to read sha1 %u/%u", i, nr);
 		for (i = 0; i < nr; i++)
 			if (fread(&entries[i].crc, 4, 1, stdin) != 1)
@@ -77,7 +78,7 @@
 			}
 			printf("%" PRIuMAX " %s (%08"PRIx32")\n",
 			       (uintmax_t) offset,
-			       sha1_to_hex(entries[i].sha1),
+			       oid_to_hex(&entries[i].oid),
 			       ntohl(entries[i].crc));
 		}
 		free(entries);
diff --git a/builtin/stash.c b/builtin/stash.c
index ab30d1e..bb4f6d8 100644
--- a/builtin/stash.c
+++ b/builtin/stash.c
@@ -427,6 +427,8 @@
 				return error(_("could not save index tree"));
 
 			reset_head();
+			discard_cache();
+			read_cache();
 		}
 	}
 
@@ -497,6 +499,10 @@
 		 */
 		cp.git_cmd = 1;
 		cp.dir = prefix;
+		argv_array_pushf(&cp.env_array, GIT_WORK_TREE_ENVIRONMENT"=%s",
+				 absolute_path(get_git_work_tree()));
+		argv_array_pushf(&cp.env_array, GIT_DIR_ENVIRONMENT"=%s",
+				 absolute_path(get_git_dir()));
 		argv_array_push(&cp.args, "status");
 		run_command(&cp);
 	}
diff --git a/builtin/submodule--helper.c b/builtin/submodule--helper.c
index 89d6f02..2c2395a 100644
--- a/builtin/submodule--helper.c
+++ b/builtin/submodule--helper.c
@@ -424,7 +424,7 @@
 		const struct cache_entry *ce = list.entries[i];
 
 		if (ce_stage(ce))
-			printf("%06o %s U\t", ce->ce_mode, sha1_to_hex(null_sha1));
+			printf("%06o %s U\t", ce->ce_mode, oid_to_hex(&null_oid));
 		else
 			printf("%06o %s %d\t", ce->ce_mode,
 			       oid_to_hex(&ce->oid), ce_stage(ce));
diff --git a/builtin/worktree.c b/builtin/worktree.c
index 7f094f8..4de44f5 100644
--- a/builtin/worktree.c
+++ b/builtin/worktree.c
@@ -10,7 +10,6 @@
 #include "run-command.h"
 #include "sigchain.h"
 #include "submodule.h"
-#include "refs.h"
 #include "utf8.h"
 #include "worktree.h"
 
@@ -350,7 +349,7 @@
 	 */
 	strbuf_reset(&sb);
 	strbuf_addf(&sb, "%s/HEAD", sb_repo.buf);
-	write_file(sb.buf, "%s", sha1_to_hex(null_sha1));
+	write_file(sb.buf, "%s", oid_to_hex(&null_oid));
 	strbuf_reset(&sb);
 	strbuf_addf(&sb, "%s/commondir", sb_repo.buf);
 	write_file(sb.buf, "../..");
diff --git a/bundle.c b/bundle.c
index b5d21cd..a85ed3f 100644
--- a/bundle.c
+++ b/bundle.c
@@ -282,7 +282,7 @@
 		struct object *object = revs->pending.objects[i].item;
 		if (object->flags & UNINTERESTING)
 			write_or_die(pack_objects.in, "^", 1);
-		write_or_die(pack_objects.in, oid_to_hex(&object->oid), GIT_SHA1_HEXSZ);
+		write_or_die(pack_objects.in, oid_to_hex(&object->oid), the_hash_algo->hexsz);
 		write_or_die(pack_objects.in, "\n", 1);
 	}
 	close(pack_objects.in);
@@ -414,7 +414,7 @@
 		}
 
 		ref_count++;
-		write_or_die(bundle_fd, oid_to_hex(&e->item->oid), 40);
+		write_or_die(bundle_fd, oid_to_hex(&e->item->oid), the_hash_algo->hexsz);
 		write_or_die(bundle_fd, " ", 1);
 		write_or_die(bundle_fd, display_ref, strlen(display_ref));
 		write_or_die(bundle_fd, "\n", 1);
diff --git a/cache-tree.c b/cache-tree.c
index 62edee4..1bd1b23 100644
--- a/cache-tree.c
+++ b/cache-tree.c
@@ -609,11 +609,66 @@
 	return it;
 }
 
+static int write_index_as_tree_internal(struct object_id *oid,
+					struct index_state *index_state,
+					int cache_tree_valid,
+					int flags,
+					const char *prefix)
+{
+	if (flags & WRITE_TREE_IGNORE_CACHE_TREE) {
+		cache_tree_free(&index_state->cache_tree);
+		cache_tree_valid = 0;
+	}
+
+	if (!index_state->cache_tree)
+		index_state->cache_tree = cache_tree();
+
+	if (!cache_tree_valid && cache_tree_update(index_state, flags) < 0)
+		return WRITE_TREE_UNMERGED_INDEX;
+
+	if (prefix) {
+		struct cache_tree *subtree;
+		subtree = cache_tree_find(index_state->cache_tree, prefix);
+		if (!subtree)
+			return WRITE_TREE_PREFIX_ERROR;
+		oidcpy(oid, &subtree->oid);
+	}
+	else
+		oidcpy(oid, &index_state->cache_tree->oid);
+
+	return 0;
+}
+
+struct tree* write_in_core_index_as_tree(struct repository *repo) {
+	struct object_id o;
+	int was_valid, ret;
+
+	struct index_state *index_state	= repo->index;
+	was_valid = index_state->cache_tree &&
+		    cache_tree_fully_valid(index_state->cache_tree);
+
+	ret = write_index_as_tree_internal(&o, index_state, was_valid, 0, NULL);
+	if (ret == WRITE_TREE_UNMERGED_INDEX) {
+		int i;
+		fprintf(stderr, "BUG: There are unmerged index entries:\n");
+		for (i = 0; i < index_state->cache_nr; i++) {
+			const struct cache_entry *ce = index_state->cache[i];
+			if (ce_stage(ce))
+				fprintf(stderr, "BUG: %d %.*s\n", ce_stage(ce),
+					(int)ce_namelen(ce), ce->name);
+		}
+		BUG("unmerged index entries when writing inmemory index");
+	}
+
+	return lookup_tree(repo, &index_state->cache_tree->oid);
+}
+
+
 int write_index_as_tree(struct object_id *oid, struct index_state *index_state, const char *index_path, int flags, const char *prefix)
 {
 	int entries, was_valid;
 	struct lock_file lock_file = LOCK_INIT;
-	int ret = 0;
+	int ret;
 
 	hold_lock_file_for_update(&lock_file, index_path, LOCK_DIE_ON_ERROR);
 
@@ -622,18 +677,14 @@
 		ret = WRITE_TREE_UNREADABLE_INDEX;
 		goto out;
 	}
-	if (flags & WRITE_TREE_IGNORE_CACHE_TREE)
-		cache_tree_free(&index_state->cache_tree);
 
-	if (!index_state->cache_tree)
-		index_state->cache_tree = cache_tree();
+	was_valid = !(flags & WRITE_TREE_IGNORE_CACHE_TREE) &&
+		    index_state->cache_tree &&
+		    cache_tree_fully_valid(index_state->cache_tree);
 
-	was_valid = cache_tree_fully_valid(index_state->cache_tree);
-	if (!was_valid) {
-		if (cache_tree_update(index_state, flags) < 0) {
-			ret = WRITE_TREE_UNMERGED_INDEX;
-			goto out;
-		}
+	ret = write_index_as_tree_internal(oid, index_state, was_valid, flags,
+					   prefix);
+	if (!ret && !was_valid) {
 		write_locked_index(index_state, &lock_file, COMMIT_LOCK);
 		/* Not being able to write is fine -- we are only interested
 		 * in updating the cache-tree part, and if the next caller
@@ -643,18 +694,6 @@
 		 */
 	}
 
-	if (prefix) {
-		struct cache_tree *subtree;
-		subtree = cache_tree_find(index_state->cache_tree, prefix);
-		if (!subtree) {
-			ret = WRITE_TREE_PREFIX_ERROR;
-			goto out;
-		}
-		oidcpy(oid, &subtree->oid);
-	}
-	else
-		oidcpy(oid, &index_state->cache_tree->oid);
-
 out:
 	rollback_lock_file(&lock_file);
 	return ret;
diff --git a/cache-tree.h b/cache-tree.h
index 757bbc4..639bfa5 100644
--- a/cache-tree.h
+++ b/cache-tree.h
@@ -34,7 +34,7 @@
 int cache_tree_update(struct index_state *, int);
 void cache_tree_verify(struct repository *, struct index_state *);
 
-/* bitmasks to write_cache_as_tree flags */
+/* bitmasks to write_index_as_tree flags */
 #define WRITE_TREE_MISSING_OK 1
 #define WRITE_TREE_IGNORE_CACHE_TREE 2
 #define WRITE_TREE_DRY_RUN 4
@@ -46,6 +46,7 @@
 #define WRITE_TREE_UNMERGED_INDEX (-2)
 #define WRITE_TREE_PREFIX_ERROR (-3)
 
+struct tree* write_in_core_index_as_tree(struct repository *repo);
 int write_index_as_tree(struct object_id *oid, struct index_state *index_state, const char *index_path, int flags, const char *prefix);
 void prime_cache_tree(struct repository *, struct index_state *, struct tree *);
 
diff --git a/cache.h b/cache.h
index 1e64b13..04cabaa 100644
--- a/cache.h
+++ b/cache.h
@@ -748,6 +748,19 @@
  */
 int index_name_pos(const struct index_state *, const char *name, int namelen);
 
+/*
+ * Some functions return the negative complement of an insert position when a
+ * precise match was not found but a position was found where the entry would
+ * need to be inserted. This helper protects that logic from any integer
+ * underflow.
+ */
+static inline int index_pos_to_insert_pos(uintmax_t pos)
+{
+	if (pos > INT_MAX)
+		die("overflow: -1 - %"PRIuMAX, pos);
+	return -1 - (int)pos;
+}
+
 #define ADD_CACHE_OK_TO_ADD 1		/* Ok to add */
 #define ADD_CACHE_OK_TO_REPLACE 2	/* Ok to replace file/directory */
 #define ADD_CACHE_SKIP_DFCHECK 4	/* Ok to skip DF conflict checks */
@@ -1067,7 +1080,6 @@
 int repo_find_unique_abbrev_r(struct repository *r, char *hex, const struct object_id *oid, int len);
 #define find_unique_abbrev_r(hex, oid, len) repo_find_unique_abbrev_r(the_repository, hex, oid, len)
 
-extern const unsigned char null_sha1[GIT_MAX_RAWSZ];
 extern const struct object_id null_oid;
 
 static inline int hashcmp(const unsigned char *sha1, const unsigned char *sha2)
@@ -1102,14 +1114,9 @@
 	return hasheq(oid1->hash, oid2->hash);
 }
 
-static inline int is_null_sha1(const unsigned char *sha1)
-{
-	return hasheq(sha1, null_sha1);
-}
-
 static inline int is_null_oid(const struct object_id *oid)
 {
-	return hasheq(oid->hash, null_sha1);
+	return oideq(oid, &null_oid);
 }
 
 static inline void hashcpy(unsigned char *sha_dst, const unsigned char *sha_src)
diff --git a/combine-diff.c b/combine-diff.c
index 3e49f3b..d5c4d83 100644
--- a/combine-diff.c
+++ b/combine-diff.c
@@ -930,7 +930,7 @@
 				 int show_file_header)
 {
 	struct diff_options *opt = &rev->diffopt;
-	int abbrev = opt->flags.full_index ? GIT_SHA1_HEXSZ : DEFAULT_ABBREV;
+	int abbrev = opt->flags.full_index ? the_hash_algo->hexsz : DEFAULT_ABBREV;
 	const char *a_prefix = opt->a_prefix ? opt->a_prefix : "a/";
 	const char *b_prefix = opt->b_prefix ? opt->b_prefix : "b/";
 	const char *c_meta = diff_get_color_opt(opt, DIFF_METAINFO);
diff --git a/commit-graph.c b/commit-graph.c
index 6ae2009..fc4a43b 100644
--- a/commit-graph.c
+++ b/commit-graph.c
@@ -1533,8 +1533,8 @@
 
 static void split_graph_merge_strategy(struct write_commit_graph_context *ctx)
 {
-	struct commit_graph *g = ctx->r->objects->commit_graph;
-	uint32_t num_commits = ctx->commits.nr;
+	struct commit_graph *g;
+	uint32_t num_commits;
 	uint32_t i;
 
 	int max_commits = 0;
@@ -1546,6 +1546,7 @@
 	}
 
 	g = ctx->r->objects->commit_graph;
+	num_commits = ctx->commits.nr;
 	ctx->num_commit_graphs_after = ctx->num_commit_graphs_before + 1;
 
 	while (g && (g->num_commits <= size_mult * num_commits ||
diff --git a/compat/mingw.c b/compat/mingw.c
index 7a0d619..6b765d9 100644
--- a/compat/mingw.c
+++ b/compat/mingw.c
@@ -1236,11 +1236,6 @@
 	return _wcsnicmp(p, q, p_len);
 }
 
-/* We need a stable sort to convert the environment between UTF-16 <-> UTF-8 */
-#ifndef INTERNAL_QSORT
-#include "qsort.c"
-#endif
-
 /*
  * Build an environment block combining the inherited environment
  * merged with the given list of settings.
@@ -1279,8 +1274,8 @@
 
 	/*
 	 * If there is a deltaenv, let's accumulate all keys into `array`,
-	 * sort them using the stable git_qsort() and then copy, skipping
-	 * duplicate keys
+	 * sort them using the stable git_stable_qsort() and then copy,
+	 * skipping duplicate keys
 	 */
 	for (p = wenv; p && *p; ) {
 		ALLOC_GROW(array, nr + 1, alloc);
@@ -1303,7 +1298,7 @@
 		p += wlen + 1;
 	}
 
-	git_qsort(array, nr, sizeof(*array), wenvcmp);
+	git_stable_qsort(array, nr, sizeof(*array), wenvcmp);
 	ALLOC_ARRAY(result, size + delta_size);
 
 	for (p = result, i = 0; i < nr; i++) {
@@ -1670,6 +1665,8 @@
 	if (!w_key)
 		die("Out of memory, (tried to allocate %u wchar_t's)", len_key);
 	xutftowcs(w_key, name, len_key);
+	/* GetEnvironmentVariableW() only sets the last error upon failure */
+	SetLastError(ERROR_SUCCESS);
 	len_value = GetEnvironmentVariableW(w_key, w_value, ARRAY_SIZE(w_value));
 	if (!len_value && GetLastError() == ERROR_ENVVAR_NOT_FOUND) {
 		free(w_key);
diff --git a/compat/vcbuild/scripts/clink.pl b/compat/vcbuild/scripts/clink.pl
index c7b021b..ec95a3b 100755
--- a/compat/vcbuild/scripts/clink.pl
+++ b/compat/vcbuild/scripts/clink.pl
@@ -68,8 +68,54 @@
 	} elsif ("$arg" =~ /^-L/ && "$arg" ne "-LTCG") {
 		$arg =~ s/^-L/-LIBPATH:/;
 		push(@lflags, $arg);
-	} elsif ("$arg" =~ /^-R/) {
+	} elsif ("$arg" =~ /^-[Rl]/) {
 		# eat
+	} elsif ("$arg" eq "-Werror") {
+		push(@cflags, "-WX");
+	} elsif ("$arg" eq "-Wall") {
+		# cl.exe understands -Wall, but it is really overzealous
+		push(@cflags, "-W4");
+		# disable the "signed/unsigned mismatch" warnings; our source code violates that
+		push(@cflags, "-wd4018");
+		push(@cflags, "-wd4245");
+		push(@cflags, "-wd4389");
+		# disable the "unreferenced formal parameter" warning; our source code violates that
+		push(@cflags, "-wd4100");
+		# disable the "conditional expression is constant" warning; our source code violates that
+		push(@cflags, "-wd4127");
+		# disable the "const object should be initialized" warning; these warnings affect only objects that are `static`
+		push(@cflags, "-wd4132");
+		# disable the "function/data pointer conversion in expression" warning; our source code violates that
+		push(@cflags, "-wd4152");
+		# disable the "non-constant aggregate initializer" warning; our source code violates that
+		push(@cflags, "-wd4204");
+		# disable the "cannot be initialized using address of automatic variable" warning; our source code violates that
+		push(@cflags, "-wd4221");
+		# disable the "possible loss of data" warnings; our source code violates that
+		push(@cflags, "-wd4244");
+		push(@cflags, "-wd4267");
+		# disable the "array is too small to include a terminating null character" warning; we ab-use strings to initialize OIDs
+		push(@cflags, "-wd4295");
+		# disable the "'<<': result of 32-bit shift implicitly converted to 64 bits" warning; our source code violates that
+		push(@cflags, "-wd4334");
+		# disable the "declaration hides previous local declaration" warning; our source code violates that
+		push(@cflags, "-wd4456");
+		# disable the "declaration hides function parameter" warning; our source code violates that
+		push(@cflags, "-wd4457");
+		# disable the "declaration hides global declaration" warning; our source code violates that
+		push(@cflags, "-wd4459");
+		# disable the "potentially uninitialized local variable '<name>' used" warning; our source code violates that
+		push(@cflags, "-wd4701");
+		# disable the "unreachable code" warning; our source code violates that
+		push(@cflags, "-wd4702");
+		# disable the "potentially uninitialized local pointer variable used" warning; our source code violates that
+		push(@cflags, "-wd4703");
+		# disable the "assignment within conditional expression" warning; our source code violates that
+		push(@cflags, "-wd4706");
+		# disable the "'inet_ntoa': Use inet_ntop() or InetNtop() instead" warning; our source code violates that
+		push(@cflags, "-wd4996");
+	} elsif ("$arg" =~ /^-W[a-z]/) {
+		# let's ignore those
 	} else {
 		push(@args, $arg);
 	}
diff --git a/compat/win32/path-utils.h b/compat/win32/path-utils.h
index 0f70d43..8ed062a 100644
--- a/compat/win32/path-utils.h
+++ b/compat/win32/path-utils.h
@@ -1,3 +1,6 @@
+#ifndef WIN32_PATH_UTILS_H
+#define WIN32_PATH_UTILS_H
+
 #define has_dos_drive_prefix(path) \
 	(isalpha(*(path)) && (path)[1] == ':' ? 2 : 0)
 int win32_skip_dos_drive_prefix(char **path);
@@ -18,3 +21,5 @@
 #define find_last_dir_sep win32_find_last_dir_sep
 int win32_offset_1st_component(const char *path);
 #define offset_1st_component win32_offset_1st_component
+
+#endif
diff --git a/compat/winansi.c b/compat/winansi.c
index cacd82c..54fd701 100644
--- a/compat/winansi.c
+++ b/compat/winansi.c
@@ -546,7 +546,7 @@
 typedef struct _OBJECT_NAME_INFORMATION
 {
 	UNICODE_STRING Name;
-	WCHAR NameBuffer[0];
+	WCHAR NameBuffer[FLEX_ARRAY];
 } OBJECT_NAME_INFORMATION, *POBJECT_NAME_INFORMATION;
 
 #define ObjectNameInformation 1
diff --git a/config.c b/config.c
index 743e457..e7052b3 100644
--- a/config.c
+++ b/config.c
@@ -1204,7 +1204,7 @@
 			default_abbrev = -1;
 		else {
 			int abbrev = git_config_int(var, value);
-			if (abbrev < minimum_abbrev || abbrev > 40)
+			if (abbrev < minimum_abbrev || abbrev > the_hash_algo->hexsz)
 				return error(_("abbrev length out of range: %d"), abbrev);
 			default_abbrev = abbrev;
 		}
@@ -1856,9 +1856,9 @@
 	if (git_config_parse_key(key, &normalized_key, NULL))
 		return NULL;
 
-	hashmap_entry_init(&k, strhash(normalized_key));
+	hashmap_entry_init(&k.ent, strhash(normalized_key));
 	k.key = normalized_key;
-	found_entry = hashmap_get(&cs->config_hash, &k, NULL);
+	found_entry = hashmap_get_entry(&cs->config_hash, &k, ent, NULL);
 	free(normalized_key);
 	return found_entry;
 }
@@ -1877,10 +1877,10 @@
 	 */
 	if (!e) {
 		e = xmalloc(sizeof(*e));
-		hashmap_entry_init(e, strhash(key));
+		hashmap_entry_init(&e->ent, strhash(key));
 		e->key = xstrdup(key);
 		string_list_init(&e->value_list, 1);
-		hashmap_add(&cs->config_hash, e);
+		hashmap_add(&cs->config_hash, &e->ent);
 	}
 	si = string_list_append_nodup(&e->value_list, xstrdup_or_null(value));
 
@@ -1908,12 +1908,14 @@
 }
 
 static int config_set_element_cmp(const void *unused_cmp_data,
-				  const void *entry,
-				  const void *entry_or_key,
+				  const struct hashmap_entry *eptr,
+				  const struct hashmap_entry *entry_or_key,
 				  const void *unused_keydata)
 {
-	const struct config_set_element *e1 = entry;
-	const struct config_set_element *e2 = entry_or_key;
+	const struct config_set_element *e1, *e2;
+
+	e1 = container_of(eptr, const struct config_set_element, ent);
+	e2 = container_of(entry_or_key, const struct config_set_element, ent);
 
 	return strcmp(e1->key, e2->key);
 }
@@ -1934,12 +1936,12 @@
 	if (!cs->hash_initialized)
 		return;
 
-	hashmap_iter_init(&cs->config_hash, &iter);
-	while ((entry = hashmap_iter_next(&iter))) {
+	hashmap_for_each_entry(&cs->config_hash, &iter, entry,
+				ent /* member name */) {
 		free(entry->key);
 		string_list_clear(&entry->value_list, 1);
 	}
-	hashmap_free(&cs->config_hash, 1);
+	hashmap_free_entries(&cs->config_hash, struct config_set_element, ent);
 	cs->hash_initialized = 0;
 	free(cs->list.items);
 	cs->list.nr = 0;
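
The config.c conversion shows the hashmap pattern used throughout this series: the
entry type embeds a struct hashmap_entry, the comparison callback recovers the
containing struct with container_of(), and iteration and teardown go through
hashmap_for_each_entry() and hashmap_free_entries().  A minimal sketch of that shape,
assuming git's cache.h/hashmap.h; the struct and function names below are illustrative
only, not part of the patch:

	struct demo_entry {
		struct hashmap_entry ent;	/* embedded, keyed by strhash(label) */
		char *label;
	};

	static int demo_entry_cmp(const void *unused_cmp_data,
				  const struct hashmap_entry *eptr,
				  const struct hashmap_entry *entry_or_key,
				  const void *unused_keydata)
	{
		const struct demo_entry *a, *b;

		a = container_of(eptr, const struct demo_entry, ent);
		b = container_of(entry_or_key, const struct demo_entry, ent);

		return strcmp(a->label, b->label);
	}

	static void demo(const char *label)
	{
		struct hashmap map;
		struct hashmap_iter iter;
		struct demo_entry *e = xmalloc(sizeof(*e)), *found;

		hashmap_init(&map, demo_entry_cmp, NULL, 0);
		hashmap_entry_init(&e->ent, strhash(label));	/* not e->ent.hash = ... */
		e->label = xstrdup(label);
		hashmap_add(&map, &e->ent);

		hashmap_for_each_entry(&map, &iter, found, ent /* member name */)
			free(found->label);	/* entry structs themselves are freed below */

		hashmap_free_entries(&map, struct demo_entry, ent);
	}
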
diff --git a/config.mak.uname b/config.mak.uname
index db7f06b..cc8efd9 100644
--- a/config.mak.uname
+++ b/config.mak.uname
@@ -703,20 +703,24 @@
 	perl contrib/buildsystems/generate -g Vcxproj
 	git add -f git.sln {*,*/lib,t/helper/*}/*.vcxproj
 
-	# Generate the LinkOrCopyBuiltins.targets file
+	# Generate the LinkOrCopyBuiltins.targets and LinkOrCopyRemoteHttp.targets files
 	(echo '<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">' && \
 	 echo '  <Target Name="CopyBuiltins_AfterBuild" AfterTargets="AfterBuild">' && \
 	 for name in $(BUILT_INS);\
 	 do \
 	   echo '    <Copy SourceFiles="$$(OutDir)\git.exe" DestinationFiles="$$(OutDir)\'"$$name"'" SkipUnchangedFiles="true" UseHardlinksIfPossible="true" />'; \
 	 done && \
+	 echo '  </Target>' && \
+	 echo '</Project>') >git/LinkOrCopyBuiltins.targets
+	(echo '<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">' && \
+	 echo '  <Target Name="CopyBuiltins_AfterBuild" AfterTargets="AfterBuild">' && \
 	 for name in $(REMOTE_CURL_ALIASES); \
 	 do \
 	   echo '    <Copy SourceFiles="$$(OutDir)\'"$(REMOTE_CURL_PRIMARY)"'" DestinationFiles="$$(OutDir)\'"$$name"'" SkipUnchangedFiles="true" UseHardlinksIfPossible="true" />'; \
 	 done && \
 	 echo '  </Target>' && \
-	 echo '</Project>') >git/LinkOrCopyBuiltins.targets
-	git add -f git/LinkOrCopyBuiltins.targets
+	 echo '</Project>') >git-remote-http/LinkOrCopyRemoteHttp.targets
+	git add -f git/LinkOrCopyBuiltins.targets git-remote-http/LinkOrCopyRemoteHttp.targets
 
 	# Add command-list.h
 	$(MAKE) MSVC=1 SKIP_VCPKG=1 prefix=/mingw64 command-list.h
@@ -724,11 +728,10 @@
 
 	# Add scripts
 	rm -f perl/perl.mak
-	$(MAKE) MSVC=1 SKIP_VCPKG=1 prefix=/mingw64 \
-		$(SCRIPT_LIB) $(SCRIPT_SH_GEN) $(SCRIPT_PERL_GEN)
+	$(MAKE) MSVC=1 SKIP_VCPKG=1 prefix=/mingw64 $(SCRIPT_LIB) $(SCRIPTS)
 	# Strip out the sane tool path, needed only for building
 	sed -i '/^git_broken_path_fix ".*/d' git-sh-setup
-	git add -f $(SCRIPT_LIB) $(SCRIPT_SH_GEN) $(SCRIPT_PERL_GEN)
+	git add -f $(SCRIPT_LIB) $(SCRIPTS)
 
 	# Add Perl module
 	$(MAKE) $(LIB_PERL_GEN)
@@ -758,6 +761,10 @@
 	$(MAKE) -C templates
 	git add -f templates/boilerplates.made templates/blt/
 
+	# Add the translated messages
+	make MSVC=1 SKIP_VCPKG=1 prefix=/mingw64 $(MOFILES)
+	git add -f $(MOFILES)
+
 	# Add build options
 	$(MAKE) MSVC=1 SKIP_VCPKG=1 prefix=/mingw64 GIT-BUILD-OPTIONS
 	git add -f GIT-BUILD-OPTIONS
diff --git a/connected.c b/connected.c
index 971db00..36c4e5d 100644
--- a/connected.c
+++ b/connected.c
@@ -29,6 +29,7 @@
 	struct packed_git *new_pack = NULL;
 	struct transport *transport;
 	size_t base_len;
+	const unsigned hexsz = the_hash_algo->hexsz;
 
 	if (!opt)
 		opt = &defaults;
@@ -100,7 +101,7 @@
 
 	sigchain_push(SIGPIPE, SIG_IGN);
 
-	commit[GIT_SHA1_HEXSZ] = '\n';
+	commit[hexsz] = '\n';
 	do {
 		/*
 		 * If index-pack already checked that:
@@ -113,8 +114,8 @@
 		if (new_pack && find_pack_entry_one(oid.hash, new_pack))
 			continue;
 
-		memcpy(commit, oid_to_hex(&oid), GIT_SHA1_HEXSZ);
-		if (write_in_full(rev_list.in, commit, GIT_SHA1_HEXSZ + 1) < 0) {
+		memcpy(commit, oid_to_hex(&oid), hexsz);
+		if (write_in_full(rev_list.in, commit, hexsz + 1) < 0) {
 			if (errno != EPIPE && errno != EINVAL)
 				error_errno(_("failed write to rev-list"));
 			err = -1;
diff --git a/contrib/buildsystems/Generators/Vcxproj.pm b/contrib/buildsystems/Generators/Vcxproj.pm
index 576ccab..5c666f9 100644
--- a/contrib/buildsystems/Generators/Vcxproj.pm
+++ b/contrib/buildsystems/Generators/Vcxproj.pm
@@ -79,7 +79,8 @@
     if (!$static_library) {
       $libs_release = join(";", sort(grep /^(?!libgit\.lib|xdiff\/lib\.lib|vcs-svn\/lib\.lib)/, @{$$build_structure{"$prefix${name}_LIBS"}}));
       $libs_debug = $libs_release;
-      $libs_debug =~ s/zlib\.lib/zlibd\.lib/;
+      $libs_debug =~ s/zlib\.lib/zlibd\.lib/g;
+      $libs_debug =~ s/libcurl\.lib/libcurl-d\.lib/g;
     }
 
     $defines =~ s/-D//g;
@@ -119,13 +120,13 @@
     <VCPKGArch Condition="'\$(Platform)'=='Win32'">x86-windows</VCPKGArch>
     <VCPKGArch Condition="'\$(Platform)'!='Win32'">x64-windows</VCPKGArch>
     <VCPKGArchDirectory>$cdup\\compat\\vcbuild\\vcpkg\\installed\\\$(VCPKGArch)</VCPKGArchDirectory>
-    <VCPKGBinDirectory Condition="'\(Configuration)'=='Debug'">\$(VCPKGArchDirectory)\\debug\\bin</VCPKGBinDirectory>
-    <VCPKGLibDirectory Condition="'\(Configuration)'=='Debug'">\$(VCPKGArchDirectory)\\debug\\lib</VCPKGLibDirectory>
-    <VCPKGBinDirectory Condition="'\(Configuration)'!='Debug'">\$(VCPKGArchDirectory)\\bin</VCPKGBinDirectory>
-    <VCPKGLibDirectory Condition="'\(Configuration)'!='Debug'">\$(VCPKGArchDirectory)\\lib</VCPKGLibDirectory>
+    <VCPKGBinDirectory Condition="'\$(Configuration)'=='Debug'">\$(VCPKGArchDirectory)\\debug\\bin</VCPKGBinDirectory>
+    <VCPKGLibDirectory Condition="'\$(Configuration)'=='Debug'">\$(VCPKGArchDirectory)\\debug\\lib</VCPKGLibDirectory>
+    <VCPKGBinDirectory Condition="'\$(Configuration)'!='Debug'">\$(VCPKGArchDirectory)\\bin</VCPKGBinDirectory>
+    <VCPKGLibDirectory Condition="'\$(Configuration)'!='Debug'">\$(VCPKGArchDirectory)\\lib</VCPKGLibDirectory>
     <VCPKGIncludeDirectory>\$(VCPKGArchDirectory)\\include</VCPKGIncludeDirectory>
-    <VCPKGLibs Condition="'\(Configuration)'=='Debug'">$libs_debug</VCPKGLibs>
-    <VCPKGLibs Condition="'\(Configuration)'!='Debug'">$libs_release</VCPKGLibs>
+    <VCPKGLibs Condition="'\$(Configuration)'=='Debug'">$libs_debug</VCPKGLibs>
+    <VCPKGLibs Condition="'\$(Configuration)'!='Debug'">$libs_release</VCPKGLibs>
   </PropertyGroup>
   <Import Project="\$(VCTargetsPath)\\Microsoft.Cpp.Default.props" />
   <PropertyGroup Condition="'\$(Configuration)'=='Debug'" Label="Configuration">
@@ -277,6 +278,9 @@
     if ($target eq 'git') {
       print F "  <Import Project=\"LinkOrCopyBuiltins.targets\" />\n";
     }
+    if ($target eq 'git-remote-http') {
+      print F "  <Import Project=\"LinkOrCopyRemoteHttp.targets\" />\n";
+    }
     print F << "EOM";
 </Project>
 EOM
diff --git a/contrib/coccinelle/hashmap.cocci b/contrib/coccinelle/hashmap.cocci
new file mode 100644
index 0000000..d69e120
--- /dev/null
+++ b/contrib/coccinelle/hashmap.cocci
@@ -0,0 +1,16 @@
+@ hashmap_entry_init_usage @
+expression E;
+struct hashmap_entry HME;
+@@
+- HME.hash = E;
++ hashmap_entry_init(&HME, E);
+
+@@
+identifier f !~ "^hashmap_entry_init$";
+expression E;
+struct hashmap_entry *HMEP;
+@@
+  f(...) {<...
+- HMEP->hash = E;
++ hashmap_entry_init(HMEP, E);
+  ...>}
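
In terms of the conversions above, this semantic patch mechanically rewrites direct
hash-field assignments into the accessor, e.g. (field and function names here are
illustrative):

	-	e->ent.hash = strhash(name);
	+	hashmap_entry_init(&e->ent, strhash(name));
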
diff --git a/convert.c b/convert.c
index deb6f71..25ac525 100644
--- a/convert.c
+++ b/convert.c
@@ -290,8 +290,8 @@
 			const char *stripped = NULL;
 			char *upper = xstrdup_toupper(enc);
 			upper[strlen(upper)-2] = '\0';
-			if (!skip_prefix(upper, "UTF-", &stripped))
-				skip_prefix(stripped, "UTF", &stripped);
+			if (skip_prefix(upper, "UTF", &stripped))
+				skip_prefix(stripped, "-", &stripped);
 			advise(advise_msg, path, stripped);
 			free(upper);
 			if (die_on_error)
@@ -310,8 +310,8 @@
 				"working-tree-encoding.");
 			const char *stripped = NULL;
 			char *upper = xstrdup_toupper(enc);
-			if (!skip_prefix(upper, "UTF-", &stripped))
-				skip_prefix(stripped, "UTF", &stripped);
+			if (skip_prefix(upper, "UTF", &stripped))
+				skip_prefix(stripped, "-", &stripped);
 			advise(advise_msg, path, stripped, stripped);
 			free(upper);
 			if (die_on_error)
diff --git a/diff.c b/diff.c
index 6db6927..afe4400 100644
--- a/diff.c
+++ b/diff.c
@@ -933,16 +933,18 @@
 }
 
 static int moved_entry_cmp(const void *hashmap_cmp_fn_data,
-			   const void *entry,
-			   const void *entry_or_key,
+			   const struct hashmap_entry *eptr,
+			   const struct hashmap_entry *entry_or_key,
 			   const void *keydata)
 {
 	const struct diff_options *diffopt = hashmap_cmp_fn_data;
-	const struct moved_entry *a = entry;
-	const struct moved_entry *b = entry_or_key;
+	const struct moved_entry *a, *b;
 	unsigned flags = diffopt->color_moved_ws_handling
 			 & XDF_WHITESPACE_FLAGS;
 
+	a = container_of(eptr, const struct moved_entry, ent);
+	b = container_of(entry_or_key, const struct moved_entry, ent);
+
 	if (diffopt->color_moved_ws_handling &
 	    COLOR_MOVED_WS_ALLOW_INDENTATION_CHANGE)
 		/*
@@ -964,8 +966,9 @@
 	struct moved_entry *ret = xmalloc(sizeof(*ret));
 	struct emitted_diff_symbol *l = &o->emitted_symbols->buf[line_no];
 	unsigned flags = o->color_moved_ws_handling & XDF_WHITESPACE_FLAGS;
+	unsigned int hash = xdiff_hash_string(l->line, l->len, flags);
 
-	ret->ent.hash = xdiff_hash_string(l->line, l->len, flags);
+	hashmap_entry_init(&ret->ent, hash);
 	ret->es = l;
 	ret->next_line = NULL;
 
@@ -1002,7 +1005,7 @@
 		if (prev_line && prev_line->es->s == o->emitted_symbols->buf[n].s)
 			prev_line->next_line = key;
 
-		hashmap_add(hm, key);
+		hashmap_add(hm, &key->ent);
 		prev_line = key;
 	}
 }
@@ -1018,7 +1021,7 @@
 		struct moved_entry *prev = pmb[i].match;
 		struct moved_entry *cur = (prev && prev->next_line) ?
 				prev->next_line : NULL;
-		if (cur && !hm->cmpfn(o, cur, match, NULL)) {
+		if (cur && !hm->cmpfn(o, &cur->ent, &match->ent, NULL)) {
 			pmb[i].match = cur;
 		} else {
 			pmb[i].match = NULL;
@@ -1035,7 +1038,7 @@
 	int i;
 	char *got_match = xcalloc(1, pmb_nr);
 
-	for (; match; match = hashmap_get_next(hm, match)) {
+	hashmap_for_each_entry_from(hm, match, ent) {
 		for (i = 0; i < pmb_nr; i++) {
 			struct moved_entry *prev = pmb[i].match;
 			struct moved_entry *cur = (prev && prev->next_line) ?
@@ -1143,13 +1146,13 @@
 		case DIFF_SYMBOL_PLUS:
 			hm = del_lines;
 			key = prepare_entry(o, n);
-			match = hashmap_get(hm, key, NULL);
+			match = hashmap_get_entry(hm, key, ent, NULL);
 			free(key);
 			break;
 		case DIFF_SYMBOL_MINUS:
 			hm = add_lines;
 			key = prepare_entry(o, n);
-			match = hashmap_get(hm, key, NULL);
+			match = hashmap_get_entry(hm, key, ent, NULL);
 			free(key);
 			break;
 		default:
@@ -1188,7 +1191,7 @@
 			 * The current line is the start of a new block.
 			 * Setup the set of potential blocks.
 			 */
-			for (; match; match = hashmap_get_next(hm, match)) {
+			hashmap_for_each_entry_from(hm, match, ent) {
 				ALLOC_GROW(pmb, pmb_nr + 1, pmb_alloc);
 				if (o->color_moved_ws_handling &
 				    COLOR_MOVED_WS_ALLOW_INDENTATION_CHANGE) {
@@ -5978,7 +5981,7 @@
 }
 
 struct patch_id_t {
-	git_SHA_CTX *ctx;
+	git_hash_ctx *ctx;
 	int patchlen;
 };
 
@@ -5995,16 +5998,16 @@
 	return dst - line;
 }
 
-void flush_one_hunk(struct object_id *result, git_SHA_CTX *ctx)
+void flush_one_hunk(struct object_id *result, git_hash_ctx *ctx)
 {
 	unsigned char hash[GIT_MAX_RAWSZ];
 	unsigned short carry = 0;
 	int i;
 
-	git_SHA1_Final(hash, ctx);
-	git_SHA1_Init(ctx);
+	the_hash_algo->final_fn(hash, ctx);
+	the_hash_algo->init_fn(ctx);
 	/* 20-byte sum, with carry */
-	for (i = 0; i < GIT_SHA1_RAWSZ; ++i) {
+	for (i = 0; i < the_hash_algo->rawsz; ++i) {
 		carry += result->hash[i] + hash[i];
 		result->hash[i] = carry;
 		carry >>= 8;
@@ -6018,21 +6021,21 @@
 
 	new_len = remove_space(line, len);
 
-	git_SHA1_Update(data->ctx, line, new_len);
+	the_hash_algo->update_fn(data->ctx, line, new_len);
 	data->patchlen += new_len;
 }
 
-static void patch_id_add_string(git_SHA_CTX *ctx, const char *str)
+static void patch_id_add_string(git_hash_ctx *ctx, const char *str)
 {
-	git_SHA1_Update(ctx, str, strlen(str));
+	the_hash_algo->update_fn(ctx, str, strlen(str));
 }
 
-static void patch_id_add_mode(git_SHA_CTX *ctx, unsigned mode)
+static void patch_id_add_mode(git_hash_ctx *ctx, unsigned mode)
 {
 	/* large enough for 2^32 in octal */
 	char buf[12];
 	int len = xsnprintf(buf, sizeof(buf), "%06o", mode);
-	git_SHA1_Update(ctx, buf, len);
+	the_hash_algo->update_fn(ctx, buf, len);
 }
 
 /* returns 0 upon success, and writes result into oid */
@@ -6040,10 +6043,10 @@
 {
 	struct diff_queue_struct *q = &diff_queued_diff;
 	int i;
-	git_SHA_CTX ctx;
+	git_hash_ctx ctx;
 	struct patch_id_t data;
 
-	git_SHA1_Init(&ctx);
+	the_hash_algo->init_fn(&ctx);
 	memset(&data, 0, sizeof(struct patch_id_t));
 	data.ctx = &ctx;
 	oidclr(oid);
@@ -6076,27 +6079,27 @@
 		len2 = remove_space(p->two->path, strlen(p->two->path));
 		patch_id_add_string(&ctx, "diff--git");
 		patch_id_add_string(&ctx, "a/");
-		git_SHA1_Update(&ctx, p->one->path, len1);
+		the_hash_algo->update_fn(&ctx, p->one->path, len1);
 		patch_id_add_string(&ctx, "b/");
-		git_SHA1_Update(&ctx, p->two->path, len2);
+		the_hash_algo->update_fn(&ctx, p->two->path, len2);
 
 		if (p->one->mode == 0) {
 			patch_id_add_string(&ctx, "newfilemode");
 			patch_id_add_mode(&ctx, p->two->mode);
 			patch_id_add_string(&ctx, "---/dev/null");
 			patch_id_add_string(&ctx, "+++b/");
-			git_SHA1_Update(&ctx, p->two->path, len2);
+			the_hash_algo->update_fn(&ctx, p->two->path, len2);
 		} else if (p->two->mode == 0) {
 			patch_id_add_string(&ctx, "deletedfilemode");
 			patch_id_add_mode(&ctx, p->one->mode);
 			patch_id_add_string(&ctx, "---a/");
-			git_SHA1_Update(&ctx, p->one->path, len1);
+			the_hash_algo->update_fn(&ctx, p->one->path, len1);
 			patch_id_add_string(&ctx, "+++/dev/null");
 		} else {
 			patch_id_add_string(&ctx, "---a/");
-			git_SHA1_Update(&ctx, p->one->path, len1);
+			the_hash_algo->update_fn(&ctx, p->one->path, len1);
 			patch_id_add_string(&ctx, "+++b/");
-			git_SHA1_Update(&ctx, p->two->path, len2);
+			the_hash_algo->update_fn(&ctx, p->two->path, len2);
 		}
 
 		if (diff_header_only)
@@ -6108,10 +6111,10 @@
 
 		if (diff_filespec_is_binary(options->repo, p->one) ||
 		    diff_filespec_is_binary(options->repo, p->two)) {
-			git_SHA1_Update(&ctx, oid_to_hex(&p->one->oid),
-					GIT_SHA1_HEXSZ);
-			git_SHA1_Update(&ctx, oid_to_hex(&p->two->oid),
-					GIT_SHA1_HEXSZ);
+			the_hash_algo->update_fn(&ctx, oid_to_hex(&p->one->oid),
+					the_hash_algo->hexsz);
+			the_hash_algo->update_fn(&ctx, oid_to_hex(&p->two->oid),
+					the_hash_algo->hexsz);
 			continue;
 		}
 
@@ -6128,7 +6131,7 @@
 	}
 
 	if (!stable)
-		git_SHA1_Final(oid->hash, &ctx);
+		the_hash_algo->final_fn(oid->hash, &ctx);
 
 	return 0;
 }
@@ -6230,8 +6233,10 @@
 			if (o->color_moved == COLOR_MOVED_ZEBRA_DIM)
 				dim_moved_lines(o);
 
-			hashmap_free(&add_lines, 1);
-			hashmap_free(&del_lines, 1);
+			hashmap_free_entries(&add_lines, struct moved_entry,
+						ent);
+			hashmap_free_entries(&del_lines, struct moved_entry,
+						ent);
 		}
 
 		for (i = 0; i < esm.nr; i++)
diff --git a/diff.h b/diff.h
index c2c3056..7f8f024 100644
--- a/diff.h
+++ b/diff.h
@@ -438,7 +438,7 @@
 
 int do_diff_cache(const struct object_id *, struct diff_options *);
 int diff_flush_patch_id(struct diff_options *, struct object_id *, int, int);
-void flush_one_hunk(struct object_id *, git_SHA_CTX *);
+void flush_one_hunk(struct object_id *result, git_hash_ctx *ctx);
 
 int diff_result_code(struct diff_options *, int);
 
diff --git a/diffcore-break.c b/diffcore-break.c
index 875aefd..9d20a6a 100644
--- a/diffcore-break.c
+++ b/diffcore-break.c
@@ -286,17 +286,17 @@
 					/* Peer survived.  Merge them */
 					merge_broken(p, pp, &outq);
 					q->queue[j] = NULL;
-					break;
+					goto next;
 				}
 			}
-			if (q->nr <= j)
-				/* The peer did not survive, so we keep
-				 * it in the output.
-				 */
-				diff_q(&outq, p);
+			/* The peer did not survive, so we keep
+			 * it in the output.
+			 */
+			diff_q(&outq, p);
 		}
 		else
 			diff_q(&outq, p);
+next:;
 	}
 	free(q->queue);
 	*q = outq;
diff --git a/diffcore-rename.c b/diffcore-rename.c
index 9624864..531d7ad 100644
--- a/diffcore-rename.c
+++ b/diffcore-rename.c
@@ -274,18 +274,17 @@
 				struct diff_options *options)
 {
 	int renames = 0;
-
 	struct diff_filespec *target = rename_dst[dst_index].two;
 	struct file_similarity *p, *best = NULL;
 	int i = 100, best_score = -1;
+	unsigned int hash = hash_filespec(options->repo, target);
 
 	/*
 	 * Find the best source match for specified destination.
 	 */
-	p = hashmap_get_from_hash(srcs,
-				  hash_filespec(options->repo, target),
-				  NULL);
-	for (; p; p = hashmap_get_next(srcs, p)) {
+	p = hashmap_get_entry_from_hash(srcs, hash, NULL,
+					struct file_similarity, entry);
+	hashmap_for_each_entry_from(srcs, p, entry) {
 		int score;
 		struct diff_filespec *source = p->filespec;
 
@@ -329,8 +328,8 @@
 	entry->index = index;
 	entry->filespec = filespec;
 
-	hashmap_entry_init(entry, hash_filespec(r, filespec));
-	hashmap_add(table, entry);
+	hashmap_entry_init(&entry->entry, hash_filespec(r, filespec));
+	hashmap_add(table, &entry->entry);
 }
 
 /*
@@ -359,7 +358,7 @@
 		renames += find_identical_files(&file_table, i, options);
 
 	/* Free the hash data structure and entries */
-	hashmap_free(&file_table, 1);
+	hashmap_free_entries(&file_table, struct file_similarity, entry);
 
 	return renames;
 }
@@ -585,7 +584,7 @@
 	stop_progress(&progress);
 
 	/* cost matrix sorted by most to least similar pair */
-	QSORT(mx, dst_cnt * NUM_CANDIDATE_PER_DST, score_compare);
+	STABLE_QSORT(mx, dst_cnt * NUM_CANDIDATE_PER_DST, score_compare);
 
 	rename_count += find_renames(mx, dst_cnt, minimum_score, 0);
 	if (detect_rename == DIFF_DETECT_COPY)
diff --git a/dir.c b/dir.c
index cab9c2a..61f559f9 100644
--- a/dir.c
+++ b/dir.c
@@ -139,7 +139,7 @@
 	 * ":(icase)path" is treated as a pathspec full of
 	 * wildcard. In other words, only prefix is considered common
 	 * prefix. If the pathspec is abc/foo abc/bar, running in
-	 * subdir xyz, the common prefix is still xyz, not xuz/abc as
+	 * subdir xyz, the common prefix is still xyz, not xyz/abc as
 	 * in non-:(icase).
 	 */
 	GUARD_PATHSPEC(pathspec,
@@ -273,19 +273,30 @@
 
 #define DO_MATCH_EXCLUDE   (1<<0)
 #define DO_MATCH_DIRECTORY (1<<1)
-#define DO_MATCH_SUBMODULE (1<<2)
+#define DO_MATCH_LEADING_PATHSPEC (1<<2)
 
 /*
- * Does 'match' match the given name?
- * A match is found if
+ * Does the given pathspec match the given name?  A match is found if
  *
- * (1) the 'match' string is leading directory of 'name', or
- * (2) the 'match' string is a wildcard and matches 'name', or
- * (3) the 'match' string is exactly the same as 'name'.
+ * (1) the pathspec string is leading directory of 'name' ("RECURSIVELY"), or
+ * (2) the pathspec string has a leading part matching 'name' ("LEADING"), or
+ * (3) the pathspec string is a wildcard and matches 'name' ("WILDCARD"), or
+ * (4) the pathspec string is exactly the same as 'name' ("EXACT").
  *
- * and the return value tells which case it was.
+ * Return value tells which case it was (1-4), or 0 when there is no match.
  *
- * It returns 0 when there is no match.
+ * It may be instructive to look at a small table of concrete examples
+ * to understand the differences between 1, 2, and 4:
+ *
+ *                              Pathspecs
+ *                |    a/b    |   a/b/    |   a/b/c
+ *          ------+-----------+-----------+------------
+ *          a/b   |  EXACT    |  EXACT[1] | LEADING[2]
+ *  Names   a/b/  | RECURSIVE |   EXACT   | LEADING[2]
+ *          a/b/c | RECURSIVE | RECURSIVE |   EXACT
+ *
+ * [1] Only if DO_MATCH_DIRECTORY is passed; otherwise, this is NOT a match.
+ * [2] Only if DO_MATCH_LEADING_PATHSPEC is passed; otherwise, not a match.
  */
 static int match_pathspec_item(const struct index_state *istate,
 			       const struct pathspec_item *item, int prefix,
@@ -353,13 +364,14 @@
 			 item->nowildcard_len - prefix))
 		return MATCHED_FNMATCH;
 
-	/* Perform checks to see if "name" is a super set of the pathspec */
-	if (flags & DO_MATCH_SUBMODULE) {
+	/* Perform checks to see if "name" is a leading string of the pathspec */
+	if (flags & DO_MATCH_LEADING_PATHSPEC) {
 		/* name is a literal prefix of the pathspec */
+		int offset = name[namelen-1] == '/' ? 1 : 0;
 		if ((namelen < matchlen) &&
-		    (match[namelen] == '/') &&
+		    (match[namelen-offset] == '/') &&
 		    !ps_strncmp(item, match, name, namelen))
-			return MATCHED_RECURSIVELY;
+			return MATCHED_RECURSIVELY_LEADING_PATHSPEC;
 
 		/* name doesn't match up to the first wild character */
 		if (item->nowildcard_len < item->len &&
@@ -376,7 +388,7 @@
 		 * The submodules themselves will be able to perform more
 		 * accurate matching to determine if the pathspec matches.
 		 */
-		return MATCHED_RECURSIVELY;
+		return MATCHED_RECURSIVELY_LEADING_PATHSPEC;
 	}
 
 	return 0;
@@ -497,7 +509,7 @@
 					strlen(submodule_name),
 					0, seen,
 					DO_MATCH_DIRECTORY |
-					DO_MATCH_SUBMODULE);
+					DO_MATCH_LEADING_PATHSPEC);
 	return matched;
 }
 
@@ -1451,6 +1463,16 @@
 		return path_none;
 
 	case index_nonexistent:
+		if (dir->flags & DIR_SKIP_NESTED_GIT) {
+			int nested_repo;
+			struct strbuf sb = STRBUF_INIT;
+			strbuf_addstr(&sb, dirname);
+			nested_repo = is_nonbare_repository_dir(&sb);
+			strbuf_release(&sb);
+			if (nested_repo)
+				return path_none;
+		}
+
 		if (dir->flags & DIR_SHOW_OTHER_DIRECTORIES)
 			break;
 		if (exclude &&
@@ -1950,8 +1972,11 @@
 		/* recurse into subdir if instructed by treat_path */
 		if ((state == path_recurse) ||
 			((state == path_untracked) &&
-			 (dir->flags & DIR_SHOW_IGNORED_TOO) &&
-			 (get_dtype(cdir.de, istate, path.buf, path.len) == DT_DIR))) {
+			 (get_dtype(cdir.de, istate, path.buf, path.len) == DT_DIR) &&
+			 ((dir->flags & DIR_SHOW_IGNORED_TOO) ||
+			  (pathspec &&
+			   do_match_pathspec(istate, pathspec, path.buf, path.len,
+					     baselen, NULL, DO_MATCH_LEADING_PATHSPEC) == MATCHED_RECURSIVELY_LEADING_PATHSPEC)))) {
 			struct untracked_cache_dir *ud;
 			ud = lookup_untracked(dir->untracked, untracked,
 					      path.buf + baselen,
@@ -1962,6 +1987,12 @@
 							 check_only, stop_at_first_file, pathspec);
 			if (subdir_state > dir_state)
 				dir_state = subdir_state;
+
+			if (pathspec &&
+			    !match_pathspec(istate, pathspec, path.buf, path.len,
+					    0 /* prefix */, NULL,
+					    0 /* do NOT special case dirs */))
+				state = path_none;
 		}
 
 		if (check_only) {
diff --git a/dir.h b/dir.h
index 608696c..2fbdef0 100644
--- a/dir.h
+++ b/dir.h
@@ -156,7 +156,8 @@
 		DIR_SHOW_IGNORED_TOO = 1<<5,
 		DIR_COLLECT_KILLED_ONLY = 1<<6,
 		DIR_KEEP_UNTRACKED_CONTENTS = 1<<7,
-		DIR_SHOW_IGNORED_TOO_MODE_MATCHING = 1<<8
+		DIR_SHOW_IGNORED_TOO_MODE_MATCHING = 1<<8,
+		DIR_SKIP_NESTED_GIT = 1<<9
 	} flags;
 	struct dir_entry **entries;
 	struct dir_entry **ignored;
@@ -211,8 +212,9 @@
  * when populating the seen[] array.
  */
 #define MATCHED_RECURSIVELY 1
-#define MATCHED_FNMATCH 2
-#define MATCHED_EXACTLY 3
+#define MATCHED_RECURSIVELY_LEADING_PATHSPEC 2
+#define MATCHED_FNMATCH 3
+#define MATCHED_EXACTLY 4
 int simple_length(const char *match);
 int no_wildcard(const char *string);
 char *common_prefix(const struct pathspec *pathspec);
diff --git a/fast-import.c b/fast-import.c
index 1f9160b..9503d08 100644
--- a/fast-import.c
+++ b/fast-import.c
@@ -2489,18 +2489,14 @@
 	}
 }
 
-static int parse_from(struct branch *b)
+static int parse_objectish(struct branch *b, const char *objectish)
 {
-	const char *from;
 	struct branch *s;
 	struct object_id oid;
 
-	if (!skip_prefix(command_buf.buf, "from ", &from))
-		return 0;
-
 	oidcpy(&oid, &b->branch_tree.versions[1].oid);
 
-	s = lookup_branch(from);
+	s = lookup_branch(objectish);
 	if (b == s)
 		die("Can't create a branch from itself: %s", b->name);
 	else if (s) {
@@ -2508,8 +2504,8 @@
 		oidcpy(&b->oid, &s->oid);
 		oidcpy(&b->branch_tree.versions[0].oid, t);
 		oidcpy(&b->branch_tree.versions[1].oid, t);
-	} else if (*from == ':') {
-		uintmax_t idnum = parse_mark_ref_eol(from);
+	} else if (*objectish == ':') {
+		uintmax_t idnum = parse_mark_ref_eol(objectish);
 		struct object_entry *oe = find_mark(idnum);
 		if (oe->type != OBJ_COMMIT)
 			die("Mark :%" PRIuMAX " not a commit", idnum);
@@ -2523,13 +2519,13 @@
 			} else
 				parse_from_existing(b);
 		}
-	} else if (!get_oid(from, &b->oid)) {
+	} else if (!get_oid(objectish, &b->oid)) {
 		parse_from_existing(b);
 		if (is_null_oid(&b->oid))
 			b->delete = 1;
 	}
 	else
-		die("Invalid ref name or SHA1 expression: %s", from);
+		die("Invalid ref name or SHA1 expression: %s", objectish);
 
 	if (b->branch_tree.tree && !oideq(&oid, &b->branch_tree.versions[1].oid)) {
 		release_tree_content_recursive(b->branch_tree.tree);
@@ -2540,6 +2536,26 @@
 	return 1;
 }
 
+static int parse_from(struct branch *b)
+{
+	const char *from;
+
+	if (!skip_prefix(command_buf.buf, "from ", &from))
+		return 0;
+
+	return parse_objectish(b, from);
+}
+
+static int parse_objectish_with_prefix(struct branch *b, const char *prefix)
+{
+	const char *base;
+
+	if (!skip_prefix(command_buf.buf, prefix, &base))
+		return 0;
+
+	return parse_objectish(b, base);
+}
+
 static struct hash_list *parse_merge(unsigned int *count)
 {
 	struct hash_list *list = NULL, **tail = &list, *n;
@@ -2714,6 +2730,7 @@
 		first_tag = t;
 	last_tag = t;
 	read_next_command();
+	parse_mark();
 
 	/* from ... */
 	if (!skip_prefix(command_buf.buf, "from ", &from))
@@ -2770,7 +2787,7 @@
 	strbuf_addbuf(&new_data, &msg);
 	free(tagger);
 
-	if (store_object(OBJ_TAG, &new_data, NULL, &t->oid, 0))
+	if (store_object(OBJ_TAG, &new_data, NULL, &t->oid, next_mark))
 		t->pack_id = MAX_PACK_ID;
 	else
 		t->pack_id = pack_id;
@@ -2779,6 +2796,7 @@
 static void parse_reset_branch(const char *arg)
 {
 	struct branch *b;
+	const char *tag_name;
 
 	b = lookup_branch(arg);
 	if (b) {
@@ -2794,6 +2812,32 @@
 		b = new_branch(arg);
 	read_next_command();
 	parse_from(b);
+	if (b->delete && skip_prefix(b->name, "refs/tags/", &tag_name)) {
+		/*
+		 * Elsewhere, we call dump_branches() before dump_tags(),
+		 * and dump_branches() will handle ref deletions first, so
+		 * in order to make sure the deletion actually takes effect,
+		 * we need to remove the tag from our list of tags to update.
+		 *
+		 * NEEDSWORK: replace list of tags with hashmap for faster
+		 * deletion?
+		 */
+		struct tag *t, *prev = NULL;
+		for (t = first_tag; t; t = t->next_tag) {
+			if (!strcmp(t->name, tag_name))
+				break;
+			prev = t;
+		}
+		if (t) {
+			if (prev)
+				prev->next_tag = t->next_tag;
+			else
+				first_tag = t->next_tag;
+			if (!t->next_tag)
+				last_tag = prev;
+			/* There is no mem_pool_free(t) function to call. */
+		}
+	}
 	if (command_buf.len > 0)
 		unread_command_buf = 1;
 }
@@ -3060,6 +3104,28 @@
 	skip_optional_lf();
 }
 
+static void parse_alias(void)
+{
+	struct object_entry *e;
+	struct branch b;
+
+	skip_optional_lf();
+	read_next_command();
+
+	/* mark ... */
+	parse_mark();
+	if (!next_mark)
+		die(_("Expected 'mark' command, got %s"), command_buf.buf);
+
+	/* to ... */
+	memset(&b, 0, sizeof(b));
+	if (!parse_objectish_with_prefix(&b, "to "))
+		die(_("Expected 'to' command, got %s"), command_buf.buf);
+	e = find_object(&b.oid);
+	assert(e);
+	insert_mark(next_mark, e);
+}
+
 static char* make_fast_import_path(const char *path)
 {
 	if (!relative_marks_paths || is_absolute_path(path))
@@ -3187,6 +3253,8 @@
 		option_import_marks(arg, from_stream, 1);
 	} else if (skip_prefix(feature, "export-marks=", &arg)) {
 		option_export_marks(arg);
+	} else if (!strcmp(feature, "alias")) {
+		; /* Don't die - this feature is supported */
 	} else if (!strcmp(feature, "get-mark")) {
 		; /* Don't die - this feature is supported */
 	} else if (!strcmp(feature, "cat-blob")) {
@@ -3343,6 +3411,8 @@
 			parse_checkpoint();
 		else if (!strcmp("done", command_buf.buf))
 			break;
+		else if (!strcmp("alias", command_buf.buf))
+			parse_alias();
 		else if (starts_with(command_buf.buf, "progress "))
 			parse_progress();
 		else if (skip_prefix(command_buf.buf, "feature ", &v))
diff --git a/fetch-pack.c b/fetch-pack.c
index 6ccc629..947da54 100644
--- a/fetch-pack.c
+++ b/fetch-pack.c
@@ -167,16 +167,16 @@
 	if (!strcmp(reader->line, "NAK"))
 		return NAK;
 	if (skip_prefix(reader->line, "ACK ", &arg)) {
-		if (!get_oid_hex(arg, result_oid)) {
-			arg += 40;
-			len -= arg - reader->line;
+		const char *p;
+		if (!parse_oid_hex(arg, result_oid, &p)) {
+			len -= p - reader->line;
 			if (len < 1)
 				return ACK;
-			if (strstr(arg, "continue"))
+			if (strstr(p, "continue"))
 				return ACK_continue;
-			if (strstr(arg, "common"))
+			if (strstr(p, "common"))
 				return ACK_common;
-			if (strstr(arg, "ready"))
+			if (strstr(p, "ready"))
 				return ACK_ready;
 			return ACK;
 		}
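
The rewritten ACK parser advances past the object name with parse_oid_hex() instead of assuming 40 hex digits, so the same code keeps working when the hash size changes. A minimal sketch of the idiom, assuming only the parse_oid_hex() and oid_to_hex() helpers declared in cache.h; the function below is illustrative and not part of the patch:

    #include "cache.h"

    /*
     * Illustrative only: split a line of the form "<hex-oid> <trailer>"
     * without assuming a 40-character SHA-1 object name.
     */
    static int split_oid_line(const char *line)
    {
    	struct object_id oid;
    	const char *rest;

    	if (parse_oid_hex(line, &oid, &rest))
    		return -1;	/* not a valid object name for the current hash */
    	/* "rest" now points just past the hex digits, however many there are */
    	printf("oid=%s rest='%s'\n", oid_to_hex(&oid), rest);
    	return 0;
    }
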
diff --git a/git-add--interactive.perl b/git-add--interactive.perl
index c20ae9e..52659bb 100755
--- a/git-add--interactive.perl
+++ b/git-add--interactive.perl
@@ -1541,7 +1541,7 @@
 		for (@{$hunk[$ix]{DISPLAY}}) {
 			print;
 		}
-		print colored $prompt_color,
+		print colored $prompt_color, "(", ($ix+1), "/$num) ",
 			sprintf(__($patch_update_prompt_modes{$patch_mode}{$hunk[$ix]{TYPE}}), $other);
 
 		my $line = prompt_single_character;
diff --git a/git-compat-util.h b/git-compat-util.h
index f0d13e4..8b8b29a 100644
--- a/git-compat-util.h
+++ b/git-compat-util.h
@@ -344,6 +344,7 @@
 #define PRItime PRIuMAX
 #define parse_timestamp strtoumax
 #define TIME_MAX UINTMAX_MAX
+#define TIME_MIN 0
 
 #ifndef PATH_SEP
 #define PATH_SEP ':'
@@ -1091,10 +1092,10 @@
 	return 0;
 }
 
+void git_stable_qsort(void *base, size_t nmemb, size_t size,
+		      int(*compar)(const void *, const void *));
 #ifdef INTERNAL_QSORT
-void git_qsort(void *base, size_t nmemb, size_t size,
-	       int(*compar)(const void *, const void *));
-#define qsort git_qsort
+#define qsort git_stable_qsort
 #endif
 
 #define QSORT(base, n, compar) sane_qsort((base), (n), sizeof(*(base)), compar)
@@ -1105,6 +1106,9 @@
 		qsort(base, nmemb, size, compar);
 }
 
+#define STABLE_QSORT(base, n, compar) \
+	git_stable_qsort((base), (n), sizeof(*(base)), compar)
+
 #ifndef HAVE_ISO_QSORT_S
 int git_qsort_s(void *base, size_t nmemb, size_t size,
 		int (*compar)(const void *, const void *, void *), void *ctx);
@@ -1309,4 +1313,42 @@
  */
 #include "banned.h"
 
+/*
+ * container_of - Get the address of an object containing a field.
+ *
+ * @ptr: pointer to the field.
+ * @type: type of the object.
+ * @member: name of the field within the object.
+ */
+#define container_of(ptr, type, member) \
+	((type *) ((char *)(ptr) - offsetof(type, member)))
+
+/*
+ * helper function for `container_of_or_null' to avoid multiple
+ * evaluation of @ptr
+ */
+static inline void *container_of_or_null_offset(void *ptr, size_t offset)
+{
+	return ptr ? (char *)ptr - offset : NULL;
+}
+
+/*
+ * like `container_of', but allows returned value to be NULL
+ */
+#define container_of_or_null(ptr, type, member) \
+	(type *)container_of_or_null_offset(ptr, offsetof(type, member))
+
+/*
+ * like offsetof(), but takes a pointer to a variable of the type which
+ * contains @member, instead of a specified type.
+ * @ptr is subject to multiple evaluation since we can't rely on __typeof__
+ * everywhere.
+ */
+#if defined(__GNUC__) /* clang sets this, too */
+#define OFFSETOF_VAR(ptr, member) offsetof(__typeof__(*ptr), member)
+#else /* !__GNUC__ */
+#define OFFSETOF_VAR(ptr, member) \
+	((uintptr_t)&(ptr)->member - (uintptr_t)(ptr))
+#endif /* !__GNUC__ */
+
 #endif
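
The container_of() family added to git-compat-util.h recovers the address of an enclosing struct from a pointer to one of its members, which is what lets a hashmap_entry live anywhere inside a caller's struct (see the hashmap changes further down). A hedged usage sketch; the struct and function names here are invented for illustration:

    #include "git-compat-util.h"

    struct point {
    	int x, y;
    };

    struct labeled_point {
    	char label[16];
    	struct point p;		/* deliberately not the first member */
    };

    static struct labeled_point *label_of(struct point *p)
    {
    	/* subtracts offsetof(struct labeled_point, p) from p */
    	return container_of(p, struct labeled_point, p);
    }

    static struct labeled_point *label_of_or_null(struct point *p)
    {
    	/* same, but maps a NULL member pointer to a NULL result */
    	return container_of_or_null(p, struct labeled_point, p);
    }

The hashmap and merge-recursive hunks below rely on exactly these helpers.
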
diff --git a/git-p4.py b/git-p4.py
index 3991e7d..60c73b6 100755
--- a/git-p4.py
+++ b/git-p4.py
@@ -1160,13 +1160,11 @@
             if contentsSize <= gitConfigInt('git-p4.largeFileCompressedThreshold'):
                 return False
             contentTempFile = self.generateTempFile(contents)
-            compressedContentFile = tempfile.NamedTemporaryFile(prefix='git-p4-large-file', delete=False)
-            zf = zipfile.ZipFile(compressedContentFile.name, mode='w')
-            zf.write(contentTempFile, compress_type=zipfile.ZIP_DEFLATED)
-            zf.close()
-            compressedContentsSize = zf.infolist()[0].compress_size
+            compressedContentFile = tempfile.NamedTemporaryFile(prefix='git-p4-large-file', delete=True)
+            with zipfile.ZipFile(compressedContentFile, mode='w') as zf:
+                zf.write(contentTempFile, compress_type=zipfile.ZIP_DEFLATED)
+                compressedContentsSize = zf.infolist()[0].compress_size
             os.remove(contentTempFile)
-            os.remove(compressedContentFile.name)
             if compressedContentsSize > gitConfigInt('git-p4.largeFileCompressedThreshold'):
                 return True
         return False
@@ -3525,8 +3523,9 @@
         self.updateOptionDict(details)
         try:
             self.commit(details, self.extractFilesFromCommit(details), self.branch)
-        except IOError:
+        except IOError as err:
             print("IO error with git fast-import. Is your git version recent enough?")
+            print("IO error details: {}".format(err))
             print(self.gitError.read())
 
     def openStreams(self):
diff --git a/grep.c b/grep.c
index 9dd2908..0bb4cbd 100644
--- a/grep.c
+++ b/grep.c
@@ -368,31 +368,20 @@
 	return 1;
 }
 
-static int has_null(const char *s, size_t len)
-{
-	/*
-	 * regcomp cannot accept patterns with NULs so when using it
-	 * we consider any pattern containing a NUL fixed.
-	 */
-	if (memchr(s, 0, len))
-		return 1;
-
-	return 0;
-}
-
 #ifdef USE_LIBPCRE1
 static void compile_pcre1_regexp(struct grep_pat *p, const struct grep_opt *opt)
 {
 	const char *error;
 	int erroffset;
 	int options = PCRE_MULTILINE;
+	int study_options = 0;
 
 	if (opt->ignore_case) {
-		if (has_non_ascii(p->pattern))
+		if (!opt->ignore_locale && has_non_ascii(p->pattern))
 			p->pcre1_tables = pcre_maketables();
 		options |= PCRE_CASELESS;
 	}
-	if (is_utf8_locale() && has_non_ascii(p->pattern))
+	if (!opt->ignore_locale && is_utf8_locale() && has_non_ascii(p->pattern))
 		options |= PCRE_UTF8;
 
 	p->pcre1_regexp = pcre_compile(p->pattern, options, &error, &erroffset,
@@ -400,22 +389,18 @@
 	if (!p->pcre1_regexp)
 		compile_regexp_failed(p, error);
 
-	p->pcre1_extra_info = pcre_study(p->pcre1_regexp, GIT_PCRE_STUDY_JIT_COMPILE, &error);
+#if defined(PCRE_CONFIG_JIT) && !defined(NO_LIBPCRE1_JIT)
+	pcre_config(PCRE_CONFIG_JIT, &p->pcre1_jit_on);
+	if (opt->debug)
+		fprintf(stderr, "pcre1_jit_on=%d\n", p->pcre1_jit_on);
+
+	if (p->pcre1_jit_on)
+		study_options = PCRE_STUDY_JIT_COMPILE;
+#endif
+
+	p->pcre1_extra_info = pcre_study(p->pcre1_regexp, study_options, &error);
 	if (!p->pcre1_extra_info && error)
 		die("%s", error);
-
-#ifdef GIT_PCRE1_USE_JIT
-	pcre_config(PCRE_CONFIG_JIT, &p->pcre1_jit_on);
-	if (p->pcre1_jit_on == 1) {
-		p->pcre1_jit_stack = pcre_jit_stack_alloc(1, 1024 * 1024);
-		if (!p->pcre1_jit_stack)
-			die("Couldn't allocate PCRE JIT stack");
-		pcre_assign_jit_stack(p->pcre1_extra_info, NULL, p->pcre1_jit_stack);
-	} else if (p->pcre1_jit_on != 0) {
-		BUG("The pcre1_jit_on variable should be 0 or 1, not %d",
-		    p->pcre1_jit_on);
-	}
-#endif
 }
 
 static int pcre1match(struct grep_pat *p, const char *line, const char *eol,
@@ -426,18 +411,9 @@
 	if (eflags & REG_NOTBOL)
 		flags |= PCRE_NOTBOL;
 
-#ifdef GIT_PCRE1_USE_JIT
-	if (p->pcre1_jit_on) {
-		ret = pcre_jit_exec(p->pcre1_regexp, p->pcre1_extra_info, line,
-				    eol - line, 0, flags, ovector,
-				    ARRAY_SIZE(ovector), p->pcre1_jit_stack);
-	} else
-#endif
-	{
-		ret = pcre_exec(p->pcre1_regexp, p->pcre1_extra_info, line,
-				eol - line, 0, flags, ovector,
-				ARRAY_SIZE(ovector));
-	}
+	ret = pcre_exec(p->pcre1_regexp, p->pcre1_extra_info, line,
+			eol - line, 0, flags, ovector,
+			ARRAY_SIZE(ovector));
 
 	if (ret < 0 && ret != PCRE_ERROR_NOMATCH)
 		die("pcre_exec failed with error code %d", ret);
@@ -453,15 +429,12 @@
 static void free_pcre1_regexp(struct grep_pat *p)
 {
 	pcre_free(p->pcre1_regexp);
-#ifdef GIT_PCRE1_USE_JIT
-	if (p->pcre1_jit_on) {
+#ifdef PCRE_CONFIG_JIT
+	if (p->pcre1_jit_on)
 		pcre_free_study(p->pcre1_extra_info);
-		pcre_jit_stack_free(p->pcre1_jit_stack);
-	} else
+	else
 #endif
-	{
 		pcre_free(p->pcre1_extra_info);
-	}
 	pcre_free((void *)p->pcre1_tables);
 }
 #else /* !USE_LIBPCRE1 */
@@ -498,14 +471,15 @@
 	p->pcre2_compile_context = NULL;
 
 	if (opt->ignore_case) {
-		if (has_non_ascii(p->pattern)) {
+		if (!opt->ignore_locale && has_non_ascii(p->pattern)) {
 			character_tables = pcre2_maketables(NULL);
 			p->pcre2_compile_context = pcre2_compile_context_create(NULL);
 			pcre2_set_character_tables(p->pcre2_compile_context, character_tables);
 		}
 		options |= PCRE2_CASELESS;
 	}
-	if (is_utf8_locale() && has_non_ascii(p->pattern))
+	if (!opt->ignore_locale && is_utf8_locale() && has_non_ascii(p->pattern) &&
+	    !(!opt->ignore_case && (p->fixed || p->is_fixed)))
 		options |= PCRE2_UTF;
 
 	p->pcre2_pattern = pcre2_compile((PCRE2_SPTR)p->pattern,
@@ -522,7 +496,9 @@
 	}
 
 	pcre2_config(PCRE2_CONFIG_JIT, &p->pcre2_jit_on);
-	if (p->pcre2_jit_on == 1) {
+	if (opt->debug)
+		fprintf(stderr, "pcre2_jit_on=%d\n", p->pcre2_jit_on);
+	if (p->pcre2_jit_on) {
 		jitret = pcre2_jit_compile(p->pcre2_pattern, PCRE2_JIT_COMPLETE);
 		if (jitret)
 			die("Couldn't JIT the PCRE2 pattern '%s', got '%d'\n", p->pattern, jitret);
@@ -547,19 +523,11 @@
 			BUG("pcre2_pattern_info() failed: %d", patinforet);
 		if (jitsizearg == 0) {
 			p->pcre2_jit_on = 0;
+			if (opt->debug)
+				fprintf(stderr, "pcre2_jit_on=%d: (*NO_JIT) in regex\n",
+					p->pcre2_jit_on);
 			return;
 		}
-
-		p->pcre2_jit_stack = pcre2_jit_stack_create(1, 1024 * 1024, NULL);
-		if (!p->pcre2_jit_stack)
-			die("Couldn't allocate PCRE2 JIT stack");
-		p->pcre2_match_context = pcre2_match_context_create(NULL);
-		if (!p->pcre2_match_context)
-			die("Couldn't allocate PCRE2 match context");
-		pcre2_jit_stack_assign(p->pcre2_match_context, NULL, p->pcre2_jit_stack);
-	} else if (p->pcre2_jit_on != 0) {
-		BUG("The pcre2_jit_on variable should be 0 or 1, not %d",
-		    p->pcre2_jit_on);
 	}
 }
 
@@ -603,8 +571,6 @@
 	pcre2_compile_context_free(p->pcre2_compile_context);
 	pcre2_code_free(p->pcre2_pattern);
 	pcre2_match_data_free(p->pcre2_match_data);
-	pcre2_jit_stack_free(p->pcre2_jit_stack);
-	pcre2_match_context_free(p->pcre2_match_context);
 }
 #else /* !USE_LIBPCRE2 */
 static void compile_pcre2_pattern(struct grep_pat *p, const struct grep_opt *opt)
@@ -626,7 +592,6 @@
 static void free_pcre2_pattern(struct grep_pat *p)
 {
 }
-#endif /* !USE_LIBPCRE2 */
 
 static void compile_fixed_regexp(struct grep_pat *p, struct grep_opt *opt)
 {
@@ -647,46 +612,66 @@
 		compile_regexp_failed(p, errbuf);
 	}
 }
+#endif /* !USE_LIBPCRE2 */
 
 static void compile_regexp(struct grep_pat *p, struct grep_opt *opt)
 {
-	int ascii_only;
 	int err;
 	int regflags = REG_NEWLINE;
 
 	p->word_regexp = opt->word_regexp;
 	p->ignore_case = opt->ignore_case;
-	ascii_only     = !has_non_ascii(p->pattern);
+	p->fixed = opt->fixed;
 
-	/*
-	 * Even when -F (fixed) asks us to do a non-regexp search, we
-	 * may not be able to correctly case-fold when -i
-	 * (ignore-case) is asked (in which case, we'll synthesize a
-	 * regexp to match the pattern that matches regexp special
-	 * characters literally, while ignoring case differences).  On
-	 * the other hand, even without -F, if the pattern does not
-	 * have any regexp special characters and there is no need for
-	 * case-folding search, we can internally turn it into a
-	 * simple string match using kws.  p->fixed tells us if we
-	 * want to use kws.
-	 */
-	if (opt->fixed ||
-	    has_null(p->pattern, p->patternlen) ||
-	    is_fixed(p->pattern, p->patternlen))
-		p->fixed = !p->ignore_case || ascii_only;
+	if (memchr(p->pattern, 0, p->patternlen) && !opt->pcre2)
+		die(_("given pattern contains NULL byte (via -f <file>). This is only supported with -P under PCRE v2"));
 
-	if (p->fixed) {
-		p->kws = kwsalloc(p->ignore_case ? tolower_trans_tbl : NULL);
-		kwsincr(p->kws, p->pattern, p->patternlen);
-		kwsprep(p->kws);
-		return;
-	} else if (opt->fixed) {
-		/*
-		 * We come here when the pattern has the non-ascii
-		 * characters we cannot case-fold, and asked to
-		 * ignore-case.
-		 */
+	p->is_fixed = is_fixed(p->pattern, p->patternlen);
+#ifdef USE_LIBPCRE2
+	if (!p->fixed && !p->is_fixed) {
+		const char *no_jit = "(*NO_JIT)";
+		const int no_jit_len = strlen(no_jit);
+		if (starts_with(p->pattern, no_jit) &&
+		    is_fixed(p->pattern + no_jit_len,
+			     p->patternlen - no_jit_len))
+			p->is_fixed = 1;
+	}
+#endif
+	if (p->fixed || p->is_fixed) {
+#ifdef USE_LIBPCRE2
+		opt->pcre2 = 1;
+		if (p->is_fixed) {
+			compile_pcre2_pattern(p, opt);
+		} else {
+			/*
+			 * E.g. t7811-grep-open.sh relies on the
+			 * pattern being restored.
+			 */
+			char *old_pattern = p->pattern;
+			size_t old_patternlen = p->patternlen;
+			struct strbuf sb = STRBUF_INIT;
+
+			/*
+			 * There is the PCRE2_LITERAL flag, but it's
+			 * only in PCRE v2 10.30 and later. Needing to
+			 * ifdef our way around that and dealing with
+			 * it + PCRE2_MULTILINE being an error is more
+			 * complex than just quoting this ourselves.
+			 */
+			strbuf_add(&sb, "\\Q", 2);
+			strbuf_add(&sb, p->pattern, p->patternlen);
+			strbuf_add(&sb, "\\E", 2);
+
+			p->pattern = sb.buf;
+			p->patternlen = sb.len;
+			compile_pcre2_pattern(p, opt);
+			p->pattern = old_pattern;
+			p->patternlen = old_patternlen;
+			strbuf_release(&sb);
+		}
+#else /* !USE_LIBPCRE2 */
 		compile_fixed_regexp(p, opt);
+#endif /* !USE_LIBPCRE2 */
 		return;
 	}
 
@@ -1053,9 +1038,7 @@
 		case GREP_PATTERN: /* atom */
 		case GREP_PATTERN_HEAD:
 		case GREP_PATTERN_BODY:
-			if (p->kws)
-				kwsfree(p->kws);
-			else if (p->pcre1_regexp)
+			if (p->pcre1_regexp)
 				free_pcre1_regexp(p);
 			else if (p->pcre2_pattern)
 				free_pcre2_pattern(p);
@@ -1115,29 +1098,12 @@
 	opt->output(opt, opt->null_following_name ? "\0" : "\n", 1);
 }
 
-static int fixmatch(struct grep_pat *p, char *line, char *eol,
-		    regmatch_t *match)
-{
-	struct kwsmatch kwsm;
-	size_t offset = kwsexec(p->kws, line, eol - line, &kwsm);
-	if (offset == -1) {
-		match->rm_so = match->rm_eo = -1;
-		return REG_NOMATCH;
-	} else {
-		match->rm_so = offset;
-		match->rm_eo = match->rm_so + kwsm.size[0];
-		return 0;
-	}
-}
-
 static int patmatch(struct grep_pat *p, char *line, char *eol,
 		    regmatch_t *match, int eflags)
 {
 	int hit;
 
-	if (p->fixed)
-		hit = !fixmatch(p, line, eol, match);
-	else if (p->pcre1_regexp)
+	if (p->pcre1_regexp)
 		hit = !pcre1match(p, line, eol, match, eflags);
 	else if (p->pcre2_pattern)
 		hit = !pcre2match(p, line, eol, match, eflags);
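
With the kwset path removed, a -F pattern that still contains regex metacharacters is handed to PCRE2 wrapped in \Q...\E so those bytes are matched literally, as the compile_regexp() hunk above shows. A hedged sketch of that quoting step using strbuf; the helper name is invented and this is not the grep.c code itself:

    #include "cache.h"
    #include "strbuf.h"

    /*
     * Illustrative helper: quote a fixed string for PCRE2 so that
     * metacharacters such as '.' or '*' are matched literally.
     */
    static char *quote_fixed_for_pcre2(const char *fixed, size_t len)
    {
    	struct strbuf sb = STRBUF_INIT;

    	strbuf_add(&sb, "\\Q", 2);
    	strbuf_add(&sb, fixed, len);	/* e.g. "a.c" stays the literal "a.c" */
    	strbuf_add(&sb, "\\E", 2);
    	return strbuf_detach(&sb, NULL);
    }
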
diff --git a/grep.h b/grep.h
index 9c8797a..05dc1bb 100644
--- a/grep.h
+++ b/grep.h
@@ -6,24 +6,9 @@
 #ifndef PCRE_NO_UTF8_CHECK
 #define PCRE_NO_UTF8_CHECK 0
 #endif
-#ifdef PCRE_CONFIG_JIT
-#if PCRE_MAJOR >= 8 && PCRE_MINOR >= 32
-#ifndef NO_LIBPCRE1_JIT
-#define GIT_PCRE1_USE_JIT
-#define GIT_PCRE_STUDY_JIT_COMPILE PCRE_STUDY_JIT_COMPILE
-#endif
-#endif
-#endif
-#ifndef GIT_PCRE_STUDY_JIT_COMPILE
-#define GIT_PCRE_STUDY_JIT_COMPILE 0
-#endif
-#if PCRE_MAJOR <= 8 && PCRE_MINOR < 20
-typedef int pcre_jit_stack;
-#endif
 #else
 typedef int pcre;
 typedef int pcre_extra;
-typedef int pcre_jit_stack;
 #endif
 #ifdef USE_LIBPCRE2
 #define PCRE2_CODE_UNIT_WIDTH 8
@@ -32,10 +17,7 @@
 typedef int pcre2_code;
 typedef int pcre2_match_data;
 typedef int pcre2_compile_context;
-typedef int pcre2_match_context;
-typedef int pcre2_jit_stack;
 #endif
-#include "kwset.h"
 #include "thread-utils.h"
 #include "userdiff.h"
 
@@ -91,17 +73,14 @@
 	regex_t regexp;
 	pcre *pcre1_regexp;
 	pcre_extra *pcre1_extra_info;
-	pcre_jit_stack *pcre1_jit_stack;
 	const unsigned char *pcre1_tables;
 	int pcre1_jit_on;
 	pcre2_code *pcre2_pattern;
 	pcre2_match_data *pcre2_match_data;
 	pcre2_compile_context *pcre2_compile_context;
-	pcre2_match_context *pcre2_match_context;
-	pcre2_jit_stack *pcre2_jit_stack;
 	uint32_t pcre2_jit_on;
-	kwset_t kws;
 	unsigned fixed:1;
+	unsigned is_fixed:1;
 	unsigned ignore_case:1;
 	unsigned word_regexp:1;
 };
@@ -176,6 +155,7 @@
 	int funcbody;
 	int extended_regexp_option;
 	int pattern_type_option;
+	int ignore_locale;
 	char colors[NR_GREP_COLORS][COLOR_MAXLEN];
 	unsigned pre_context;
 	unsigned post_context;
diff --git a/hashmap.c b/hashmap.c
index d42f01f..39c1311 100644
--- a/hashmap.c
+++ b/hashmap.c
@@ -140,8 +140,8 @@
 }
 
 static int always_equal(const void *unused_cmp_data,
-			const void *unused1,
-			const void *unused2,
+			const struct hashmap_entry *unused1,
+			const struct hashmap_entry *unused2,
 			const void *unused_keydata)
 {
 	return 0;
@@ -171,41 +171,49 @@
 	map->do_count_items = 1;
 }
 
-void hashmap_free(struct hashmap *map, int free_entries)
+void hashmap_free_(struct hashmap *map, ssize_t entry_offset)
 {
 	if (!map || !map->table)
 		return;
-	if (free_entries) {
+	if (entry_offset >= 0) { /* called by hashmap_free_entries */
 		struct hashmap_iter iter;
 		struct hashmap_entry *e;
+
 		hashmap_iter_init(map, &iter);
 		while ((e = hashmap_iter_next(&iter)))
-			free(e);
+			/*
+			 * like container_of, but using caller-calculated
+			 * offset (caller being hashmap_free_entries)
+			 */
+			free((char *)e - entry_offset);
 	}
 	free(map->table);
 	memset(map, 0, sizeof(*map));
 }
 
-void *hashmap_get(const struct hashmap *map, const void *key, const void *keydata)
+struct hashmap_entry *hashmap_get(const struct hashmap *map,
+				const struct hashmap_entry *key,
+				const void *keydata)
 {
 	return *find_entry_ptr(map, key, keydata);
 }
 
-void *hashmap_get_next(const struct hashmap *map, const void *entry)
+struct hashmap_entry *hashmap_get_next(const struct hashmap *map,
+			const struct hashmap_entry *entry)
 {
-	struct hashmap_entry *e = ((struct hashmap_entry *) entry)->next;
+	struct hashmap_entry *e = entry->next;
 	for (; e; e = e->next)
 		if (entry_equals(map, entry, e, NULL))
 			return e;
 	return NULL;
 }
 
-void hashmap_add(struct hashmap *map, void *entry)
+void hashmap_add(struct hashmap *map, struct hashmap_entry *entry)
 {
 	unsigned int b = bucket(map, entry);
 
 	/* add entry */
-	((struct hashmap_entry *) entry)->next = map->table[b];
+	entry->next = map->table[b];
 	map->table[b] = entry;
 
 	/* fix size and rehash if appropriate */
@@ -216,7 +224,9 @@
 	}
 }
 
-void *hashmap_remove(struct hashmap *map, const void *key, const void *keydata)
+struct hashmap_entry *hashmap_remove(struct hashmap *map,
+					const struct hashmap_entry *key,
+					const void *keydata)
 {
 	struct hashmap_entry *old;
 	struct hashmap_entry **e = find_entry_ptr(map, key, keydata);
@@ -238,7 +248,8 @@
 	return old;
 }
 
-void *hashmap_put(struct hashmap *map, void *entry)
+struct hashmap_entry *hashmap_put(struct hashmap *map,
+				struct hashmap_entry *entry)
 {
 	struct hashmap_entry *old = hashmap_remove(map, entry, NULL);
 	hashmap_add(map, entry);
@@ -252,7 +263,7 @@
 	iter->next = NULL;
 }
 
-void *hashmap_iter_next(struct hashmap_iter *iter)
+struct hashmap_entry *hashmap_iter_next(struct hashmap_iter *iter)
 {
 	struct hashmap_entry *current = iter->next;
 	for (;;) {
@@ -275,10 +286,15 @@
 };
 
 static int pool_entry_cmp(const void *unused_cmp_data,
-			  const struct pool_entry *e1,
-			  const struct pool_entry *e2,
-			  const unsigned char *keydata)
+			  const struct hashmap_entry *eptr,
+			  const struct hashmap_entry *entry_or_key,
+			  const void *keydata)
 {
+	const struct pool_entry *e1, *e2;
+
+	e1 = container_of(eptr, const struct pool_entry, ent);
+	e2 = container_of(entry_or_key, const struct pool_entry, ent);
+
 	return e1->data != keydata &&
 	       (e1->len != e2->len || memcmp(e1->data, keydata, e1->len));
 }
@@ -290,18 +306,18 @@
 
 	/* initialize string pool hashmap */
 	if (!map.tablesize)
-		hashmap_init(&map, (hashmap_cmp_fn) pool_entry_cmp, NULL, 0);
+		hashmap_init(&map, pool_entry_cmp, NULL, 0);
 
 	/* lookup interned string in pool */
-	hashmap_entry_init(&key, memhash(data, len));
+	hashmap_entry_init(&key.ent, memhash(data, len));
 	key.len = len;
-	e = hashmap_get(&map, &key, data);
+	e = hashmap_get_entry(&map, &key, ent, data);
 	if (!e) {
 		/* not found: create it */
 		FLEX_ALLOC_MEM(e, data, data, len);
-		hashmap_entry_init(e, key.ent.hash);
+		hashmap_entry_init(&e->ent, key.ent.hash);
 		e->len = len;
-		hashmap_add(&map, e);
+		hashmap_add(&map, &e->ent);
 	}
 	return e->data;
 }
diff --git a/hashmap.h b/hashmap.h
index 8424911..bd27015 100644
--- a/hashmap.h
+++ b/hashmap.h
@@ -13,7 +13,7 @@
  *
  * struct hashmap map;
  * struct long2string {
- *     struct hashmap_entry ent; // must be the first member!
+ *     struct hashmap_entry ent;
  *     long key;
  *     char value[FLEX_ARRAY];   // be careful with allocating on stack!
  * };
@@ -21,12 +21,16 @@
  * #define COMPARE_VALUE 1
  *
  * static int long2string_cmp(const void *hashmap_cmp_fn_data,
- *                            const struct long2string *e1,
- *                            const struct long2string *e2,
+ *                            const struct hashmap_entry *eptr,
+ *                            const struct hashmap_entry *entry_or_key,
  *                            const void *keydata)
  * {
  *     const char *string = keydata;
  *     unsigned flags = *(unsigned *)hashmap_cmp_fn_data;
+ *     const struct long2string *e1, *e2;
+ *
+ *     e1 = container_of(eptr, const struct long2string, ent);
+ *     e2 = container_of(entry_or_key, const struct long2string, ent);
  *
  *     if (flags & COMPARE_VALUE)
  *         return e1->key != e2->key ||
@@ -41,54 +45,58 @@
  *     char value[255], action[32];
  *     unsigned flags = 0;
  *
- *     hashmap_init(&map, (hashmap_cmp_fn) long2string_cmp, &flags, 0);
+ *     hashmap_init(&map, long2string_cmp, &flags, 0);
  *
  *     while (scanf("%s %ld %s", action, &key, value)) {
  *
  *         if (!strcmp("add", action)) {
  *             struct long2string *e;
  *             FLEX_ALLOC_STR(e, value, value);
- *             hashmap_entry_init(e, memhash(&key, sizeof(long)));
+ *             hashmap_entry_init(&e->ent, memhash(&key, sizeof(long)));
  *             e->key = key;
- *             hashmap_add(&map, e);
+ *             hashmap_add(&map, &e->ent);
  *         }
  *
  *         if (!strcmp("print_all_by_key", action)) {
  *             struct long2string k, *e;
- *             hashmap_entry_init(&k, memhash(&key, sizeof(long)));
+ *             hashmap_entry_init(&k.ent, memhash(&key, sizeof(long)));
  *             k.key = key;
  *
  *             flags &= ~COMPARE_VALUE;
- *             e = hashmap_get(&map, &k, NULL);
+ *             e = hashmap_get_entry(&map, &k, ent, NULL);
  *             if (e) {
  *                 printf("first: %ld %s\n", e->key, e->value);
- *                 while ((e = hashmap_get_next(&map, e)))
+ *                 while ((e = hashmap_get_next_entry(&map, e,
+ *                                                    ent))) {
  *                     printf("found more: %ld %s\n", e->key, e->value);
+ *                 }
  *             }
  *         }
  *
  *         if (!strcmp("has_exact_match", action)) {
  *             struct long2string *e;
  *             FLEX_ALLOC_STR(e, value, value);
- *             hashmap_entry_init(e, memhash(&key, sizeof(long)));
+ *             hashmap_entry_init(&e->ent, memhash(&key, sizeof(long)));
  *             e->key = key;
  *
  *             flags |= COMPARE_VALUE;
- *             printf("%sfound\n", hashmap_get(&map, e, NULL) ? "" : "not ");
+ *             printf("%sfound\n",
+ *                    hashmap_get(&map, &e->ent, NULL) ? "" : "not ");
  *             free(e);
  *         }
  *
  *         if (!strcmp("has_exact_match_no_heap_alloc", action)) {
  *             struct long2string k;
- *             hashmap_entry_init(&k, memhash(&key, sizeof(long)));
+ *             hashmap_entry_init(&k.ent, memhash(&key, sizeof(long)));
  *             k.key = key;
  *
  *             flags |= COMPARE_VALUE;
- *             printf("%sfound\n", hashmap_get(&map, &k, value) ? "" : "not ");
+ *             printf("%sfound\n",
+ *                    hashmap_get(&map, &k.ent, value) ? "" : "not ");
  *         }
  *
  *         if (!strcmp("end", action)) {
- *             hashmap_free(&map, 1);
+ *             hashmap_free_entries(&map, struct long2string, ent);
  *             break;
  *         }
  *     }
@@ -133,7 +141,7 @@
 
 /*
  * struct hashmap_entry is an opaque structure representing an entry in the
- * hash table, which must be used as first member of user data structures.
+ * hash table.
  * Ideally it should be followed by an int-sized member to prevent unused
  * memory on 64-bit systems due to alignment.
  */
@@ -168,7 +176,8 @@
  * The `hashmap_cmp_fn_data` entry is the pointer given in the init function.
  */
 typedef int (*hashmap_cmp_fn)(const void *hashmap_cmp_fn_data,
-			      const void *entry, const void *entry_or_key,
+			      const struct hashmap_entry *entry,
+			      const struct hashmap_entry *entry_or_key,
 			      const void *keydata);
 
 /*
@@ -223,13 +232,20 @@
 			 const void *equals_function_data,
 			 size_t initial_size);
 
+/* internal function for freeing hashmap */
+void hashmap_free_(struct hashmap *map, ssize_t offset);
+
 /*
- * Frees a hashmap structure and allocated memory.
- *
- * If `free_entries` is true, each hashmap_entry in the map is freed as well
- * using stdlibs free().
+ * Frees a hashmap structure and allocated memory, leaving entries undisturbed.
  */
-void hashmap_free(struct hashmap *map, int free_entries);
+#define hashmap_free(map) hashmap_free_(map, -1)
+
+/*
+ * Frees @map and all entries.  @type is the struct type of the entry
+ * where @member is the hashmap_entry struct used to associate with @map
+ */
+#define hashmap_free_entries(map, type, member) \
+	hashmap_free_(map, offsetof(type, member))
 
 /* hashmap_entry functions */
 
@@ -244,9 +260,9 @@
  * your structure was allocated with xmalloc(), you can just free(3) it,
  * and if it is on stack, you can just let it go out of scope).
  */
-static inline void hashmap_entry_init(void *entry, unsigned int hash)
+static inline void hashmap_entry_init(struct hashmap_entry *e,
+					unsigned int hash)
 {
-	struct hashmap_entry *e = entry;
 	e->hash = hash;
 	e->next = NULL;
 }
@@ -286,8 +302,9 @@
  * If an entry with matching hash code is found, `key` and `keydata` are passed
  * to `hashmap_cmp_fn` to decide whether the entry matches the key.
  */
-void *hashmap_get(const struct hashmap *map, const void *key,
-			 const void *keydata);
+struct hashmap_entry *hashmap_get(const struct hashmap *map,
+				const struct hashmap_entry *key,
+				const void *keydata);
 
 /*
  * Returns the hashmap entry for the specified hash code and key data,
@@ -301,9 +318,10 @@
  * `entry_or_key` parameter of `hashmap_cmp_fn` points to a hashmap_entry
  * structure that should not be used in the comparison.
  */
-static inline void *hashmap_get_from_hash(const struct hashmap *map,
-					  unsigned int hash,
-					  const void *keydata)
+static inline struct hashmap_entry *hashmap_get_from_hash(
+					const struct hashmap *map,
+					unsigned int hash,
+					const void *keydata)
 {
 	struct hashmap_entry key;
 	hashmap_entry_init(&key, hash);
@@ -318,7 +336,8 @@
  * `entry` is the hashmap_entry to start the search from, obtained via a previous
  * call to `hashmap_get` or `hashmap_get_next`.
  */
-void *hashmap_get_next(const struct hashmap *map, const void *entry);
+struct hashmap_entry *hashmap_get_next(const struct hashmap *map,
+			const struct hashmap_entry *entry);
 
 /*
  * Adds a hashmap entry. This allows to add duplicate entries (i.e.
@@ -327,7 +346,7 @@
  * `map` is the hashmap structure.
  * `entry` is the entry to add.
  */
-void hashmap_add(struct hashmap *map, void *entry);
+void hashmap_add(struct hashmap *map, struct hashmap_entry *entry);
 
 /*
  * Adds or replaces a hashmap entry. If the hashmap contains duplicate
@@ -337,7 +356,20 @@
  * `entry` is the entry to add or replace.
  * Returns the replaced entry, or NULL if not found (i.e. the entry was added).
  */
-void *hashmap_put(struct hashmap *map, void *entry);
+struct hashmap_entry *hashmap_put(struct hashmap *map,
+				struct hashmap_entry *entry);
+
+/*
+ * Adds or replaces a hashmap entry contained within @keyvar,
+ * where @keyvar is a pointer to a struct containing a
+ * "struct hashmap_entry" @member.
+ *
+ * Returns the replaced pointer which is of the same type as @keyvar,
+ * or NULL if not found.
+ */
+#define hashmap_put_entry(map, keyvar, member) \
+	container_of_or_null_offset(hashmap_put(map, &(keyvar)->member), \
+				OFFSETOF_VAR(keyvar, member))
 
 /*
  * Removes a hashmap entry matching the specified key. If the hashmap contains
@@ -346,8 +378,24 @@
  *
  * Argument explanation is the same as in `hashmap_get`.
  */
-void *hashmap_remove(struct hashmap *map, const void *key,
-		const void *keydata);
+struct hashmap_entry *hashmap_remove(struct hashmap *map,
+					const struct hashmap_entry *key,
+					const void *keydata);
+
+/*
+ * Removes a hashmap entry contained within @keyvar,
+ * where @keyvar is a pointer to a struct containing a
+ * "struct hashmap_entry" @member.
+ *
+ * See `hashmap_get` for an explanation of @keydata
+ *
+ * Returns the removed entry, which is of the same type as @keyvar,
+ * or NULL if not found.
+ */
+#define hashmap_remove_entry(map, keyvar, member, keydata) \
+	container_of_or_null_offset( \
+			hashmap_remove(map, &(keyvar)->member, keydata), \
+			OFFSETOF_VAR(keyvar, member))
 
 /*
  * Returns the `bucket` an entry is stored in.
@@ -370,10 +418,10 @@
 void hashmap_iter_init(struct hashmap *map, struct hashmap_iter *iter);
 
 /* Returns the next hashmap_entry, or NULL if there are no more entries. */
-void *hashmap_iter_next(struct hashmap_iter *iter);
+struct hashmap_entry *hashmap_iter_next(struct hashmap_iter *iter);
 
 /* Initializes the iterator and returns the first entry, if any. */
-static inline void *hashmap_iter_first(struct hashmap *map,
+static inline struct hashmap_entry *hashmap_iter_first(struct hashmap *map,
 		struct hashmap_iter *iter)
 {
 	hashmap_iter_init(map, iter);
@@ -381,6 +429,64 @@
 }
 
 /*
+ * returns the first entry in @map using @iter, where the entry is of
+ * @type (e.g. "struct foo") and @member is the name of the
+ * "struct hashmap_entry" in @type
+ */
+#define hashmap_iter_first_entry(map, iter, type, member) \
+	container_of_or_null(hashmap_iter_first(map, iter), type, member)
+
+/* internal macro for hashmap_for_each_entry */
+#define hashmap_iter_next_entry_offset(iter, offset) \
+	container_of_or_null_offset(hashmap_iter_next(iter), offset)
+
+/* internal macro for hashmap_for_each_entry */
+#define hashmap_iter_first_entry_offset(map, iter, offset) \
+	container_of_or_null_offset(hashmap_iter_first(map, iter), offset)
+
+/*
+ * iterate through @map using @iter, @var is a pointer to a type
+ * containing a @member which is a "struct hashmap_entry"
+ */
+#define hashmap_for_each_entry(map, iter, var, member) \
+	for (var = hashmap_iter_first_entry_offset(map, iter, \
+						OFFSETOF_VAR(var, member)); \
+		var; \
+		var = hashmap_iter_next_entry_offset(iter, \
+						OFFSETOF_VAR(var, member)))
+
+/*
+ * returns a pointer of type matching @keyvar, or NULL if nothing found.
+ * @keyvar is a pointer to a struct containing a
+ * "struct hashmap_entry" @member.
+ */
+#define hashmap_get_entry(map, keyvar, member, keydata) \
+	container_of_or_null_offset( \
+				hashmap_get(map, &(keyvar)->member, keydata), \
+				OFFSETOF_VAR(keyvar, member))
+
+#define hashmap_get_entry_from_hash(map, hash, keydata, type, member) \
+	container_of_or_null(hashmap_get_from_hash(map, hash, keydata), \
+				type, member)
+/*
+ * returns the next equal pointer to @var, or NULL if not found.
+ * @var is a pointer of any type containing "struct hashmap_entry"
+ * @member is the name of the "struct hashmap_entry" field
+ */
+#define hashmap_get_next_entry(map, var, member) \
+	container_of_or_null_offset(hashmap_get_next(map, &(var)->member), \
+				OFFSETOF_VAR(var, member))
+
+/*
+ * iterate @map starting from @var, where @var is a pointer of @type
+ * and @member is the name of the "struct hashmap_entry" field in @type
+ */
+#define hashmap_for_each_entry_from(map, var, member) \
+	for (; \
+		var; \
+		var = hashmap_get_next_entry(map, var, member))
+
+/*
  * Disable item counting and automatic rehashing when adding/removing items.
  *
  * Normally, the hashmap keeps track of the number of items in the map
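
The typed macros above (hashmap_get_entry(), hashmap_for_each_entry(), hashmap_free_entries() and friends) let callers work with their own struct type and simply name the embedded hashmap_entry member, instead of casting void pointers. A hedged example of caller code written against the updated API; the struct and function names are invented and not part of the patch:

    #include "git-compat-util.h"
    #include "hashmap.h"

    struct path_count {
    	struct hashmap_entry ent;
    	unsigned count;
    	char path[FLEX_ARRAY];
    };

    static unsigned total_count(struct hashmap *map)
    {
    	struct hashmap_iter iter;
    	struct path_count *pc;
    	unsigned total = 0;

    	/* iterates typed entries; no manual casts from hashmap_entry */
    	hashmap_for_each_entry(map, &iter, pc, ent /* member name */)
    		total += pc->count;
    	return total;
    }

    static void release_map(struct hashmap *map)
    {
    	/* frees each struct path_count via its "ent" member, then the table */
    	hashmap_free_entries(map, struct path_count, ent);
    }

The merge-recursive.c hunks that follow use the same hashmap_for_each_entry() pattern for the directory-rename and collision maps.
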
diff --git a/merge-recursive.c b/merge-recursive.c
index 22a12cf..42be7c9 100644
--- a/merge-recursive.c
+++ b/merge-recursive.c
@@ -4,30 +4,40 @@
  * The thieves were Alex Riesen and Johannes Schindelin, in June/July 2006
  */
 #include "cache.h"
-#include "config.h"
+#include "merge-recursive.h"
+
 #include "advice.h"
-#include "lockfile.h"
-#include "cache-tree.h"
-#include "object-store.h"
-#include "repository.h"
-#include "commit.h"
+#include "alloc.h"
+#include "attr.h"
 #include "blob.h"
 #include "builtin.h"
-#include "tree-walk.h"
+#include "cache-tree.h"
+#include "commit.h"
+#include "commit-reach.h"
+#include "config.h"
 #include "diff.h"
 #include "diffcore.h"
-#include "tag.h"
-#include "alloc.h"
-#include "unpack-trees.h"
-#include "string-list.h"
-#include "xdiff-interface.h"
-#include "ll-merge.h"
-#include "attr.h"
-#include "merge-recursive.h"
 #include "dir.h"
-#include "submodule.h"
+#include "ll-merge.h"
+#include "lockfile.h"
+#include "object-store.h"
+#include "repository.h"
 #include "revision.h"
-#include "commit-reach.h"
+#include "string-list.h"
+#include "submodule.h"
+#include "tag.h"
+#include "tree-walk.h"
+#include "unpack-trees.h"
+#include "xdiff-interface.h"
+
+struct merge_options_internal {
+	int call_depth;
+	int needed_rename_limit;
+	struct hashmap current_file_dir_set;
+	struct string_list df_conflict_file_set;
+	struct unpack_trees_options unpack_opts;
+	struct index_state orig_index;
+};
 
 struct path_hashmap_entry {
 	struct hashmap_entry e;
@@ -35,14 +45,16 @@
 };
 
 static int path_hashmap_cmp(const void *cmp_data,
-			    const void *entry,
-			    const void *entry_or_key,
+			    const struct hashmap_entry *eptr,
+			    const struct hashmap_entry *entry_or_key,
 			    const void *keydata)
 {
-	const struct path_hashmap_entry *a = entry;
-	const struct path_hashmap_entry *b = entry_or_key;
+	const struct path_hashmap_entry *a, *b;
 	const char *key = keydata;
 
+	a = container_of(eptr, const struct path_hashmap_entry, e);
+	b = container_of(entry_or_key, const struct path_hashmap_entry, e);
+
 	if (ignore_case)
 		return strcasecmp(a->path, key ? key : b->path);
 	else
@@ -54,6 +66,24 @@
 	return ignore_case ? strihash(path) : strhash(path);
 }
 
+/*
+ * For dir_rename_entry, directory names are stored as a full path from the
+ * toplevel of the repository and do not include a trailing '/'.  Also:
+ *
+ *   dir:                original name of directory being renamed
+ *   non_unique_new_dir: if true, could not determine new_dir
+ *   new_dir:            final name of directory being renamed
+ *   possible_new_dirs:  temporary used to help determine new_dir; see comments
+ *                       in get_directory_renames() for details
+ */
+struct dir_rename_entry {
+	struct hashmap_entry ent;
+	char *dir;
+	unsigned non_unique_new_dir:1;
+	struct strbuf new_dir;
+	struct string_list possible_new_dirs;
+};
+
 static struct dir_rename_entry *dir_rename_find_entry(struct hashmap *hashmap,
 						      char *dir)
 {
@@ -61,18 +91,20 @@
 
 	if (dir == NULL)
 		return NULL;
-	hashmap_entry_init(&key, strhash(dir));
+	hashmap_entry_init(&key.ent, strhash(dir));
 	key.dir = dir;
-	return hashmap_get(hashmap, &key, NULL);
+	return hashmap_get_entry(hashmap, &key, ent, NULL);
 }
 
 static int dir_rename_cmp(const void *unused_cmp_data,
-			  const void *entry,
-			  const void *entry_or_key,
+			  const struct hashmap_entry *eptr,
+			  const struct hashmap_entry *entry_or_key,
 			  const void *unused_keydata)
 {
-	const struct dir_rename_entry *e1 = entry;
-	const struct dir_rename_entry *e2 = entry_or_key;
+	const struct dir_rename_entry *e1, *e2;
+
+	e1 = container_of(eptr, const struct dir_rename_entry, ent);
+	e2 = container_of(entry_or_key, const struct dir_rename_entry, ent);
 
 	return strcmp(e1->dir, e2->dir);
 }
@@ -85,34 +117,46 @@
 static void dir_rename_entry_init(struct dir_rename_entry *entry,
 				  char *directory)
 {
-	hashmap_entry_init(entry, strhash(directory));
+	hashmap_entry_init(&entry->ent, strhash(directory));
 	entry->dir = directory;
 	entry->non_unique_new_dir = 0;
 	strbuf_init(&entry->new_dir, 0);
 	string_list_init(&entry->possible_new_dirs, 0);
 }
 
+struct collision_entry {
+	struct hashmap_entry ent;
+	char *target_file;
+	struct string_list source_files;
+	unsigned reported_already:1;
+};
+
 static struct collision_entry *collision_find_entry(struct hashmap *hashmap,
 						    char *target_file)
 {
 	struct collision_entry key;
 
-	hashmap_entry_init(&key, strhash(target_file));
+	hashmap_entry_init(&key.ent, strhash(target_file));
 	key.target_file = target_file;
-	return hashmap_get(hashmap, &key, NULL);
+	return hashmap_get_entry(hashmap, &key, ent, NULL);
 }
 
-static int collision_cmp(void *unused_cmp_data,
-			 const struct collision_entry *e1,
-			 const struct collision_entry *e2,
+static int collision_cmp(const void *unused_cmp_data,
+			 const struct hashmap_entry *eptr,
+			 const struct hashmap_entry *entry_or_key,
 			 const void *unused_keydata)
 {
+	const struct collision_entry *e1, *e2;
+
+	e1 = container_of(eptr, const struct collision_entry, ent);
+	e2 = container_of(entry_or_key, const struct collision_entry, ent);
+
 	return strcmp(e1->target_file, e2->target_file);
 }
 
 static void collision_init(struct hashmap *map)
 {
-	hashmap_init(map, (hashmap_cmp_fn) collision_cmp, NULL, 0);
+	hashmap_init(map, collision_cmp, NULL, 0);
 }
 
 static void flush_output(struct merge_options *opt)
@@ -284,7 +328,8 @@
 
 static int show(struct merge_options *opt, int v)
 {
-	return (!opt->call_depth && opt->verbosity >= v) || opt->verbosity >= 5;
+	return (!opt->priv->call_depth && opt->verbosity >= v) ||
+		opt->verbosity >= 5;
 }
 
 __attribute__((format (printf, 3, 4)))
@@ -295,7 +340,7 @@
 	if (!show(opt, v))
 		return;
 
-	strbuf_addchars(&opt->obuf, ' ', opt->call_depth * 2);
+	strbuf_addchars(&opt->obuf, ' ', opt->priv->call_depth * 2);
 
 	va_start(ap, fmt);
 	strbuf_vaddf(&opt->obuf, fmt, ap);
@@ -310,7 +355,7 @@
 {
 	struct merge_remote_desc *desc;
 
-	strbuf_addchars(&opt->obuf, ' ', opt->call_depth * 2);
+	strbuf_addchars(&opt->obuf, ' ', opt->priv->call_depth * 2);
 	desc = merge_remote_util(commit);
 	if (desc)
 		strbuf_addf(&opt->obuf, "virtual %s\n", desc->name);
@@ -358,6 +403,11 @@
 	return ret;
 }
 
+static inline int merge_detect_rename(struct merge_options *opt)
+{
+	return (opt->detect_renames >= 0) ? opt->detect_renames : 1;
+}
+
 static void init_tree_desc_from_tree(struct tree_desc *desc, struct tree *tree)
 {
 	parse_tree(tree);
@@ -373,74 +423,43 @@
 	struct tree_desc t[3];
 	struct index_state tmp_index = { NULL };
 
-	memset(&opt->unpack_opts, 0, sizeof(opt->unpack_opts));
-	if (opt->call_depth)
-		opt->unpack_opts.index_only = 1;
+	memset(&opt->priv->unpack_opts, 0, sizeof(opt->priv->unpack_opts));
+	if (opt->priv->call_depth)
+		opt->priv->unpack_opts.index_only = 1;
 	else
-		opt->unpack_opts.update = 1;
-	opt->unpack_opts.merge = 1;
-	opt->unpack_opts.head_idx = 2;
-	opt->unpack_opts.fn = threeway_merge;
-	opt->unpack_opts.src_index = opt->repo->index;
-	opt->unpack_opts.dst_index = &tmp_index;
-	opt->unpack_opts.aggressive = !merge_detect_rename(opt);
-	setup_unpack_trees_porcelain(&opt->unpack_opts, "merge");
+		opt->priv->unpack_opts.update = 1;
+	opt->priv->unpack_opts.merge = 1;
+	opt->priv->unpack_opts.head_idx = 2;
+	opt->priv->unpack_opts.fn = threeway_merge;
+	opt->priv->unpack_opts.src_index = opt->repo->index;
+	opt->priv->unpack_opts.dst_index = &tmp_index;
+	opt->priv->unpack_opts.aggressive = !merge_detect_rename(opt);
+	setup_unpack_trees_porcelain(&opt->priv->unpack_opts, "merge");
 
 	init_tree_desc_from_tree(t+0, common);
 	init_tree_desc_from_tree(t+1, head);
 	init_tree_desc_from_tree(t+2, merge);
 
-	rc = unpack_trees(3, t, &opt->unpack_opts);
+	rc = unpack_trees(3, t, &opt->priv->unpack_opts);
 	cache_tree_free(&opt->repo->index->cache_tree);
 
 	/*
-	 * Update opt->repo->index to match the new results, AFTER saving a copy
-	 * in opt->orig_index.  Update src_index to point to the saved copy.
-	 * (verify_uptodate() checks src_index, and the original index is
-	 * the one that had the necessary modification timestamps.)
+	 * Update opt->repo->index to match the new results, AFTER saving a
+	 * copy in opt->priv->orig_index.  Update src_index to point to the
+	 * saved copy.  (verify_uptodate() checks src_index, and the original
+	 * index is the one that had the necessary modification timestamps.)
 	 */
-	opt->orig_index = *opt->repo->index;
+	opt->priv->orig_index = *opt->repo->index;
 	*opt->repo->index = tmp_index;
-	opt->unpack_opts.src_index = &opt->orig_index;
+	opt->priv->unpack_opts.src_index = &opt->priv->orig_index;
 
 	return rc;
 }
 
 static void unpack_trees_finish(struct merge_options *opt)
 {
-	discard_index(&opt->orig_index);
-	clear_unpack_trees_porcelain(&opt->unpack_opts);
-}
-
-struct tree *write_tree_from_memory(struct merge_options *opt)
-{
-	struct tree *result = NULL;
-	struct index_state *istate = opt->repo->index;
-
-	if (unmerged_index(istate)) {
-		int i;
-		fprintf(stderr, "BUG: There are unmerged index entries:\n");
-		for (i = 0; i < istate->cache_nr; i++) {
-			const struct cache_entry *ce = istate->cache[i];
-			if (ce_stage(ce))
-				fprintf(stderr, "BUG: %d %.*s\n", ce_stage(ce),
-					(int)ce_namelen(ce), ce->name);
-		}
-		BUG("unmerged index entries in merge-recursive.c");
-	}
-
-	if (!istate->cache_tree)
-		istate->cache_tree = cache_tree();
-
-	if (!cache_tree_fully_valid(istate->cache_tree) &&
-	    cache_tree_update(istate, 0) < 0) {
-		err(opt, _("error building trees"));
-		return NULL;
-	}
-
-	result = lookup_tree(opt->repo, &istate->cache_tree->oid);
-
-	return result;
+	discard_index(&opt->priv->orig_index);
+	clear_unpack_trees_porcelain(&opt->priv->unpack_opts);
 }
 
 static int save_files_dirs(const struct object_id *oid,
@@ -454,8 +473,8 @@
 	strbuf_addstr(base, path);
 
 	FLEX_ALLOC_MEM(entry, path, base->buf, base->len);
-	hashmap_entry_init(entry, path_hash(entry->path));
-	hashmap_add(&opt->current_file_dir_set, entry);
+	hashmap_entry_init(&entry->e, path_hash(entry->path));
+	hashmap_add(&opt->priv->current_file_dir_set, &entry->e);
 
 	strbuf_setlen(base, baselen);
 	return (S_ISDIR(mode) ? READ_TREE_RECURSIVE : 0);
@@ -586,7 +605,7 @@
 	 * If we're merging merge-bases, we don't want to bother with
 	 * any working directory changes.
 	 */
-	if (opt->call_depth)
+	if (opt->priv->call_depth)
 		return;
 
 	/* Ensure D/F conflicts are adjacent in the entries list. */
@@ -598,7 +617,7 @@
 	df_sorted_entries.cmp = string_list_df_name_compare;
 	string_list_sort(&df_sorted_entries);
 
-	string_list_clear(&opt->df_conflict_file_set, 1);
+	string_list_clear(&opt->priv->df_conflict_file_set, 1);
 	for (i = 0; i < df_sorted_entries.nr; i++) {
 		const char *path = df_sorted_entries.items[i].string;
 		int len = strlen(path);
@@ -614,7 +633,7 @@
 		    len > last_len &&
 		    memcmp(path, last_file, last_len) == 0 &&
 		    path[last_len] == '/') {
-			string_list_insert(&opt->df_conflict_file_set, last_file);
+			string_list_insert(&opt->priv->df_conflict_file_set, last_file);
 		}
 
 		/*
@@ -681,8 +700,8 @@
 static int remove_file(struct merge_options *opt, int clean,
 		       const char *path, int no_wd)
 {
-	int update_cache = opt->call_depth || clean;
-	int update_working_directory = !opt->call_depth && !no_wd;
+	int update_cache = opt->priv->call_depth || clean;
+	int update_working_directory = !opt->priv->call_depth && !no_wd;
 
 	if (update_cache) {
 		if (remove_file_from_index(opt->repo->index, path))
@@ -712,7 +731,9 @@
 			out->buf[i] = '_';
 }
 
-static char *unique_path(struct merge_options *opt, const char *path, const char *branch)
+static char *unique_path(struct merge_options *opt,
+			 const char *path,
+			 const char *branch)
 {
 	struct path_hashmap_entry *entry;
 	struct strbuf newpath = STRBUF_INIT;
@@ -723,16 +744,16 @@
 	add_flattened_path(&newpath, branch);
 
 	base_len = newpath.len;
-	while (hashmap_get_from_hash(&opt->current_file_dir_set,
+	while (hashmap_get_from_hash(&opt->priv->current_file_dir_set,
 				     path_hash(newpath.buf), newpath.buf) ||
-	       (!opt->call_depth && file_exists(newpath.buf))) {
+	       (!opt->priv->call_depth && file_exists(newpath.buf))) {
 		strbuf_setlen(&newpath, base_len);
 		strbuf_addf(&newpath, "_%d", suffix++);
 	}
 
 	FLEX_ALLOC_MEM(entry, path, newpath.buf, newpath.len);
-	hashmap_entry_init(entry, path_hash(entry->path));
-	hashmap_add(&opt->current_file_dir_set, entry);
+	hashmap_entry_init(&entry->e, path_hash(entry->path));
+	hashmap_add(&opt->priv->current_file_dir_set, &entry->e);
 	return strbuf_detach(&newpath, NULL);
 }
 
@@ -775,7 +796,7 @@
 static int was_tracked_and_matches(struct merge_options *opt, const char *path,
 				   const struct diff_filespec *blob)
 {
-	int pos = index_name_pos(&opt->orig_index, path, strlen(path));
+	int pos = index_name_pos(&opt->priv->orig_index, path, strlen(path));
 	struct cache_entry *ce;
 
 	if (0 > pos)
@@ -783,7 +804,7 @@
 		return 0;
 
 	/* See if the file we were tracking before matches */
-	ce = opt->orig_index.cache[pos];
+	ce = opt->priv->orig_index.cache[pos];
 	return (oid_eq(&ce->oid, &blob->oid) && ce->ce_mode == blob->mode);
 }
 
@@ -792,7 +813,7 @@
  */
 static int was_tracked(struct merge_options *opt, const char *path)
 {
-	int pos = index_name_pos(&opt->orig_index, path, strlen(path));
+	int pos = index_name_pos(&opt->priv->orig_index, path, strlen(path));
 
 	if (0 <= pos)
 		/* we were tracking this path before the merge */
@@ -849,12 +870,12 @@
 	struct cache_entry *ce;
 	int dirty = 1;
 
-	if (opt->call_depth || !was_tracked(opt, path))
+	if (opt->priv->call_depth || !was_tracked(opt, path))
 		return !dirty;
 
-	ce = index_file_exists(opt->unpack_opts.src_index,
+	ce = index_file_exists(opt->priv->unpack_opts.src_index,
 			       path, strlen(path), ignore_case);
-	dirty = verify_uptodate(ce, &opt->unpack_opts) != 0;
+	dirty = verify_uptodate(ce, &opt->priv->unpack_opts) != 0;
 	return dirty;
 }
 
@@ -864,8 +885,8 @@
 	const char *msg = _("failed to create path '%s'%s");
 
 	/* Unlink any D/F conflict files that are in the way */
-	for (i = 0; i < opt->df_conflict_file_set.nr; i++) {
-		const char *df_path = opt->df_conflict_file_set.items[i].string;
+	for (i = 0; i < opt->priv->df_conflict_file_set.nr; i++) {
+		const char *df_path = opt->priv->df_conflict_file_set.items[i].string;
 		size_t pathlen = strlen(path);
 		size_t df_pathlen = strlen(df_path);
 		if (df_pathlen < pathlen &&
@@ -875,7 +896,7 @@
 			       _("Removing %s to make room for subdirectory\n"),
 			       df_path);
 			unlink(df_path);
-			unsorted_string_list_delete_item(&opt->df_conflict_file_set,
+			unsorted_string_list_delete_item(&opt->priv->df_conflict_file_set,
 							 i, 0);
 			break;
 		}
@@ -916,7 +937,7 @@
 {
 	int ret = 0;
 
-	if (opt->call_depth)
+	if (opt->priv->call_depth)
 		update_wd = 0;
 
 	if (update_wd) {
@@ -935,9 +956,11 @@
 		}
 
 		buf = read_object_file(&contents->oid, &type, &size);
-		if (!buf)
-			return err(opt, _("cannot read object %s '%s'"),
-				   oid_to_hex(&contents->oid), path);
+		if (!buf) {
+			ret = err(opt, _("cannot read object %s '%s'"),
+				  oid_to_hex(&contents->oid), path);
+			goto free_buf;
+		}
 		if (type != OBJ_BLOB) {
 			ret = err(opt, _("blob expected for %s '%s'"),
 				  oid_to_hex(&contents->oid), path);
@@ -945,7 +968,8 @@
 		}
 		if (S_ISREG(contents->mode)) {
 			struct strbuf strbuf = STRBUF_INIT;
-			if (convert_to_working_tree(opt->repo->index, path, buf, size, &strbuf)) {
+			if (convert_to_working_tree(opt->repo->index,
+						    path, buf, size, &strbuf)) {
 				free(buf);
 				size = strbuf.len;
 				buf = strbuf_detach(&strbuf, NULL);
@@ -998,7 +1022,7 @@
 		       const char *path)
 {
 	return update_file_flags(opt, contents, path,
-				 opt->call_depth || clean, !opt->call_depth);
+				 opt->priv->call_depth || clean, !opt->priv->call_depth);
 }
 
 /* Low level file merging, update and removal */
@@ -1020,22 +1044,22 @@
 {
 	mmfile_t orig, src1, src2;
 	struct ll_merge_options ll_opts = {0};
-	char *base_name, *name1, *name2;
+	char *base, *name1, *name2;
 	int merge_status;
 
 	ll_opts.renormalize = opt->renormalize;
 	ll_opts.extra_marker_size = extra_marker_size;
 	ll_opts.xdl_opts = opt->xdl_opts;
 
-	if (opt->call_depth) {
+	if (opt->priv->call_depth) {
 		ll_opts.virtual_ancestor = 1;
 		ll_opts.variant = 0;
 	} else {
 		switch (opt->recursive_variant) {
-		case MERGE_RECURSIVE_OURS:
+		case MERGE_VARIANT_OURS:
 			ll_opts.variant = XDL_MERGE_FAVOR_OURS;
 			break;
-		case MERGE_RECURSIVE_THEIRS:
+		case MERGE_VARIANT_THEIRS:
 			ll_opts.variant = XDL_MERGE_FAVOR_THEIRS;
 			break;
 		default:
@@ -1044,16 +1068,13 @@
 		}
 	}
 
-	assert(a->path && b->path);
-	if (strcmp(a->path, b->path) ||
-	    (opt->ancestor != NULL && strcmp(a->path, o->path) != 0)) {
-		base_name = opt->ancestor == NULL ? NULL :
-			mkpathdup("%s:%s", opt->ancestor, o->path);
+	assert(a->path && b->path && o->path && opt->ancestor);
+	if (strcmp(a->path, b->path) || strcmp(a->path, o->path) != 0) {
+		base  = mkpathdup("%s:%s", opt->ancestor, o->path);
 		name1 = mkpathdup("%s:%s", branch1, a->path);
 		name2 = mkpathdup("%s:%s", branch2, b->path);
 	} else {
-		base_name = opt->ancestor == NULL ? NULL :
-			mkpathdup("%s", opt->ancestor);
+		base  = mkpathdup("%s", opt->ancestor);
 		name1 = mkpathdup("%s", branch1);
 		name2 = mkpathdup("%s", branch2);
 	}
@@ -1062,11 +1083,11 @@
 	read_mmblob(&src1, &a->oid);
 	read_mmblob(&src2, &b->oid);
 
-	merge_status = ll_merge(result_buf, a->path, &orig, base_name,
+	merge_status = ll_merge(result_buf, a->path, &orig, base,
 				&src1, name1, &src2, name2,
 				opt->repo->index, &ll_opts);
 
-	free(base_name);
+	free(base);
 	free(name1);
 	free(name2);
 	free(orig.ptr);
@@ -1161,7 +1182,7 @@
 	struct object_array merges;
 
 	int i;
-	int search = !opt->call_depth;
+	int search = !opt->priv->call_depth;
 
 	/* store a in result in case we fail */
 	oidcpy(result, a);
@@ -1345,15 +1366,15 @@
 							&b->oid);
 		} else if (S_ISLNK(a->mode)) {
 			switch (opt->recursive_variant) {
-			case MERGE_RECURSIVE_NORMAL:
+			case MERGE_VARIANT_NORMAL:
 				oidcpy(&result->blob.oid, &a->oid);
 				if (!oid_eq(&a->oid, &b->oid))
 					result->clean = 0;
 				break;
-			case MERGE_RECURSIVE_OURS:
+			case MERGE_VARIANT_OURS:
 				oidcpy(&result->blob.oid, &a->oid);
 				break;
-			case MERGE_RECURSIVE_THEIRS:
+			case MERGE_VARIANT_THEIRS:
 				oidcpy(&result->blob.oid, &b->oid);
 				break;
 			}
@@ -1379,10 +1400,11 @@
 	const struct rename *ren = ci->ren1;
 	const struct diff_filespec *dest = ren->pair->two;
 	char *file_path = dest->path;
-	int mark_conflicted = (opt->detect_directory_renames == 1);
+	int mark_conflicted = (opt->detect_directory_renames ==
+			       MERGE_DIRECTORY_RENAMES_CONFLICT);
 	assert(ren->dir_rename_original_dest);
 
-	if (!opt->call_depth && would_lose_untracked(opt, dest->path)) {
+	if (!opt->priv->call_depth && would_lose_untracked(opt, dest->path)) {
 		mark_conflicted = 1;
 		file_path = unique_path(opt, dest->path, ren->branch);
 		output(opt, 1, _("Error: Refusing to lose untracked file at %s; "
@@ -1425,12 +1447,12 @@
 	const char *update_path = path;
 	int ret = 0;
 
-	if (dir_in_way(opt->repo->index, path, !opt->call_depth, 0) ||
-	    (!opt->call_depth && would_lose_untracked(opt, path))) {
+	if (dir_in_way(opt->repo->index, path, !opt->priv->call_depth, 0) ||
+	    (!opt->priv->call_depth && would_lose_untracked(opt, path))) {
 		update_path = alt_path = unique_path(opt, path, change_branch);
 	}
 
-	if (opt->call_depth) {
+	if (opt->priv->call_depth) {
 		/*
 		 * We cannot arbitrarily accept either a_sha or b_sha as
 		 * correct; since there is no true "middle point" between
@@ -1505,14 +1527,14 @@
 				     opt->branch2 : opt->branch1);
 
 	if (handle_change_delete(opt,
-				 opt->call_depth ? orig->path : dest->path,
-				 opt->call_depth ? NULL : orig->path,
+				 opt->priv->call_depth ? orig->path : dest->path,
+				 opt->priv->call_depth ? NULL : orig->path,
 				 orig, dest,
 				 rename_branch, delete_branch,
 				 _("rename"), _("renamed")))
 		return -1;
 
-	if (opt->call_depth)
+	if (opt->priv->call_depth)
 		return remove_file_from_index(opt->repo->index, dest->path);
 	else
 		return update_stages(opt, dest->path, NULL,
@@ -1549,7 +1571,7 @@
 	/*
 	 * In the recursive case, we just opt to undo renames
 	 */
-	if (opt->call_depth && (prev_path1 || prev_path2)) {
+	if (opt->priv->call_depth && (prev_path1 || prev_path2)) {
 		/* Put first file (a->oid, a->mode) in its original spot */
 		if (prev_path1) {
 			if (update_file(opt, 1, a, prev_path1))
@@ -1578,10 +1600,10 @@
 	/* Remove rename sources if rename/add or rename/rename(2to1) */
 	if (prev_path1)
 		remove_file(opt, 1, prev_path1,
-			    opt->call_depth || would_lose_untracked(opt, prev_path1));
+			    opt->priv->call_depth || would_lose_untracked(opt, prev_path1));
 	if (prev_path2)
 		remove_file(opt, 1, prev_path2,
-			    opt->call_depth || would_lose_untracked(opt, prev_path2));
+			    opt->priv->call_depth || would_lose_untracked(opt, prev_path2));
 
 	/*
 	 * Remove the collision path, if it wouldn't cause dirty contents
@@ -1623,12 +1645,12 @@
 	null.mode = 0;
 
 	if (merge_mode_and_contents(opt, &null, a, b, collide_path,
-				    branch1, branch2, opt->call_depth * 2, &mfi))
+				    branch1, branch2, opt->priv->call_depth * 2, &mfi))
 		return -1;
 	mfi.clean &= !alt_path;
 	if (update_file(opt, mfi.clean, &mfi.blob, update_path))
 		return -1;
-	if (!mfi.clean && !opt->call_depth &&
+	if (!mfi.clean && !opt->priv->call_depth &&
 	    update_stages(opt, collide_path, NULL, a, b))
 		return -1;
 	free(alt_path);
@@ -1668,7 +1690,7 @@
 				    &ci->ren1->src_entry->stages[other_stage],
 				    prev_path_desc,
 				    opt->branch1, opt->branch2,
-				    1 + opt->call_depth * 2, &mfi))
+				    1 + opt->priv->call_depth * 2, &mfi))
 		return -1;
 	free(prev_path_desc);
 
@@ -1686,7 +1708,7 @@
 				    const char *branch2)
 {
 	char *new_path = NULL;
-	if (dir_in_way(opt->repo->index, path, !opt->call_depth, 0)) {
+	if (dir_in_way(opt->repo->index, path, !opt->priv->call_depth, 0)) {
 		new_path = unique_path(opt, path, branch1);
 		output(opt, 1, _("%s is a directory in %s adding "
 			       "as %s instead"),
@@ -1717,17 +1739,17 @@
 	       "rename \"%s\"->\"%s\" in \"%s\"%s"),
 	       o->path, a->path, ci->ren1->branch,
 	       o->path, b->path, ci->ren2->branch,
-	       opt->call_depth ? _(" (left unresolved)") : "");
+	       opt->priv->call_depth ? _(" (left unresolved)") : "");
 
 	path_desc = xstrfmt("%s and %s, both renamed from %s",
 			    a->path, b->path, o->path);
 	if (merge_mode_and_contents(opt, o, a, b, path_desc,
 				    ci->ren1->branch, ci->ren2->branch,
-				    opt->call_depth * 2, &mfi))
+				    opt->priv->call_depth * 2, &mfi))
 		return -1;
 	free(path_desc);
 
-	if (opt->call_depth) {
+	if (opt->priv->call_depth) {
 		/*
 		 * FIXME: For rename/add-source conflicts (if we could detect
 		 * such), this is wrong.  We should instead find a unique
@@ -1842,12 +1864,12 @@
 				    &ci->ren1->src_entry->stages[ostage1],
 				    path_side_1_desc,
 				    opt->branch1, opt->branch2,
-				    1 + opt->call_depth * 2, &mfi_c1) ||
+				    1 + opt->priv->call_depth * 2, &mfi_c1) ||
 	    merge_mode_and_contents(opt, b,
 				    &ci->ren2->src_entry->stages[ostage2],
 				    c2, path_side_2_desc,
 				    opt->branch1, opt->branch2,
-				    1 + opt->call_depth * 2, &mfi_c2))
+				    1 + opt->priv->call_depth * 2, &mfi_c2))
 		return -1;
 	free(path_side_1_desc);
 	free(path_side_2_desc);
@@ -1881,17 +1903,15 @@
 	 */
 	if (opts.detect_rename > DIFF_DETECT_RENAME)
 		opts.detect_rename = DIFF_DETECT_RENAME;
-	opts.rename_limit = opt->merge_rename_limit >= 0 ? opt->merge_rename_limit :
-			    opt->diff_rename_limit >= 0 ? opt->diff_rename_limit :
-			    1000;
+	opts.rename_limit = (opt->rename_limit >= 0) ? opt->rename_limit : 1000;
 	opts.rename_score = opt->rename_score;
 	opts.show_rename_progress = opt->show_rename_progress;
 	opts.output_format = DIFF_FORMAT_NO_OUTPUT;
 	diff_setup_done(&opts);
 	diff_tree_oid(&o_tree->object.oid, &tree->object.oid, "", &opts);
 	diffcore_std(&opts);
-	if (opts.needed_rename_limit > opt->needed_rename_limit)
-		opt->needed_rename_limit = opts.needed_rename_limit;
+	if (opts.needed_rename_limit > opt->priv->needed_rename_limit)
+		opt->priv->needed_rename_limit = opts.needed_rename_limit;
 
 	ret = xmalloc(sizeof(*ret));
 	*ret = diff_queued_diff;
@@ -2002,7 +2022,7 @@
 
 	for (i = 0; i < items_to_remove->nr; i++) {
 		entry = items_to_remove->items[i].util;
-		hashmap_remove(dir_renames, entry, NULL);
+		hashmap_remove(dir_renames, &entry->ent, NULL);
 	}
 	string_list_clear(items_to_remove, 0);
 }
@@ -2125,8 +2145,8 @@
 	struct string_list remove_from_head = STRING_LIST_INIT_NODUP;
 	struct string_list remove_from_merge = STRING_LIST_INIT_NODUP;
 
-	hashmap_iter_init(dir_re_head, &iter);
-	while ((head_ent = hashmap_iter_next(&iter))) {
+	hashmap_for_each_entry(dir_re_head, &iter, head_ent,
+				ent /* member name */) {
 		merge_ent = dir_rename_find_entry(dir_re_merge, head_ent->dir);
 		if (merge_ent &&
 		    !head_ent->non_unique_new_dir &&
@@ -2150,8 +2170,8 @@
 	remove_hashmap_entries(dir_re_head, &remove_from_head);
 	remove_hashmap_entries(dir_re_merge, &remove_from_merge);
 
-	hashmap_iter_init(dir_re_merge, &iter);
-	while ((merge_ent = hashmap_iter_next(&iter))) {
+	hashmap_for_each_entry(dir_re_merge, &iter, merge_ent,
+				ent /* member name */) {
 		head_ent = dir_rename_find_entry(dir_re_head, merge_ent->dir);
 		if (tree_has_path(opt->repo, merge, merge_ent->dir)) {
 			/* 2. This wasn't a directory rename after all */
@@ -2230,7 +2250,7 @@
 		if (!entry) {
 			entry = xmalloc(sizeof(*entry));
 			dir_rename_entry_init(entry, old_dir);
-			hashmap_put(dir_renames, entry);
+			hashmap_put(dir_renames, &entry->ent);
 		} else {
 			free(old_dir);
 		}
@@ -2255,8 +2275,8 @@
 	 * we set non_unique_new_dir.  Once we've determined the winner (or
 	 * that there is no winner), we no longer need possible_new_dirs.
 	 */
-	hashmap_iter_init(dir_renames, &iter);
-	while ((entry = hashmap_iter_next(&iter))) {
+	hashmap_for_each_entry(dir_renames, &iter, entry,
+				ent /* member name */) {
 		int max = 0;
 		int bad_max = 0;
 		char *best = NULL;
@@ -2359,8 +2379,9 @@
 		if (!collision_ent) {
 			collision_ent = xcalloc(1,
 						sizeof(struct collision_entry));
-			hashmap_entry_init(collision_ent, strhash(new_path));
-			hashmap_put(collisions, collision_ent);
+			hashmap_entry_init(&collision_ent->ent,
+						strhash(new_path));
+			hashmap_put(collisions, &collision_ent->ent);
 			collision_ent->target_file = new_path;
 		} else {
 			free(new_path);
@@ -2613,12 +2634,12 @@
 							     entries);
 	}
 
-	hashmap_iter_init(&collisions, &iter);
-	while ((e = hashmap_iter_next(&iter))) {
+	hashmap_for_each_entry(&collisions, &iter, e,
+				ent /* member name */) {
 		free(e->target_file);
 		string_list_clear(&e->source_files, 0);
 	}
-	hashmap_free(&collisions, 1);
+	hashmap_free_entries(&collisions, struct collision_entry, ent);
 	return renames;
 }
 
@@ -2831,13 +2852,13 @@
 	struct hashmap_iter iter;
 	struct dir_rename_entry *e;
 
-	hashmap_iter_init(dir_renames, &iter);
-	while ((e = hashmap_iter_next(&iter))) {
+	hashmap_for_each_entry(dir_renames, &iter, e,
+				ent /* member name */) {
 		free(e->dir);
 		strbuf_release(&e->new_dir);
 		/* possible_new_dirs already cleared in get_directory_renames */
 	}
-	hashmap_free(dir_renames, 1);
+	hashmap_free_entries(dir_renames, struct dir_rename_entry, ent);
 	free(dir_renames);
 
 	free(pairs->queue);
@@ -2864,8 +2885,9 @@
 	head_pairs = get_diffpairs(opt, common, head);
 	merge_pairs = get_diffpairs(opt, common, merge);
 
-	if ((opt->detect_directory_renames == 2) ||
-	    (opt->detect_directory_renames == 1 && !opt->call_depth)) {
+	if ((opt->detect_directory_renames == MERGE_DIRECTORY_RENAMES_TRUE) ||
+	    (opt->detect_directory_renames == MERGE_DIRECTORY_RENAMES_CONFLICT &&
+	     !opt->priv->call_depth)) {
 		dir_re_head = get_directory_renames(head_pairs);
 		dir_re_merge = get_directory_renames(merge_pairs);
 
@@ -3022,13 +3044,13 @@
 		reason = _("add/add");
 
 	assert(o->path && a->path && b->path);
-	if (ci && dir_in_way(opt->repo->index, path, !opt->call_depth,
+	if (ci && dir_in_way(opt->repo->index, path, !opt->priv->call_depth,
 			     S_ISGITLINK(ci->ren1->pair->two->mode)))
 		df_conflict_remains = 1;
 
 	if (merge_mode_and_contents(opt, o, a, b, path,
 				    opt->branch1, opt->branch2,
-				    opt->call_depth * 2, mfi))
+				    opt->priv->call_depth * 2, mfi))
 		return -1;
 
 	/*
@@ -3044,7 +3066,7 @@
 
 		output(opt, 3, _("Skipped %s (merged same as existing)"), path);
 		if (add_cacheinfo(opt, &mfi->blob, path,
-				  0, (!opt->call_depth && !is_dirty), 0))
+				  0, (!opt->priv->call_depth && !is_dirty), 0))
 			return -1;
 		/*
 		 * However, add_cacheinfo() will delete the old cache entry
@@ -3052,8 +3074,8 @@
 		 * flag to avoid making the file appear as if it were
 		 * deleted by the user.
 		 */
-		pos = index_name_pos(&opt->orig_index, path, strlen(path));
-		ce = opt->orig_index.cache[pos];
+		pos = index_name_pos(&opt->priv->orig_index, path, strlen(path));
+		ce = opt->priv->orig_index.cache[pos];
 		if (ce_skip_worktree(ce)) {
 			pos = index_name_pos(opt->repo->index, path, strlen(path));
 			ce = opt->repo->index->cache[pos];
@@ -3074,7 +3096,7 @@
 
 	if (df_conflict_remains || is_dirty) {
 		char *new_path;
-		if (opt->call_depth) {
+		if (opt->priv->call_depth) {
 			remove_file_from_index(opt->repo->index, path);
 		} else {
 			if (!mfi->clean) {
@@ -3123,7 +3145,8 @@
 	clean = handle_content_merge(&mfi, opt, path, was_dirty(opt, path),
 				     o, a, b, ci);
 
-	if (clean && opt->detect_directory_renames == 1 &&
+	if (clean &&
+	    opt->detect_directory_renames == MERGE_DIRECTORY_RENAMES_CONFLICT &&
 	    ren->dir_rename_original_dest) {
 		if (update_stages(opt, path,
 				  NULL,
@@ -3168,12 +3191,12 @@
 		return clean;
 
 	/* Sanity checks */
-	assert(opt->detect_directory_renames > 0);
+	assert(opt->detect_directory_renames > MERGE_DIRECTORY_RENAMES_NONE);
 	assert(ren->dir_rename_original_type == 'A' ||
 	       ren->dir_rename_original_type == 'R');
 
 	/* Check whether to treat directory renames as a conflict */
-	clean = (opt->detect_directory_renames == 2);
+	clean = (opt->detect_directory_renames == MERGE_DIRECTORY_RENAMES_TRUE);
 
 	is_add = (ren->dir_rename_original_type == 'A');
 	if (ren->dir_rename_original_type == 'A' && clean) {
@@ -3332,7 +3355,7 @@
 			conf = _("directory/file");
 		}
 		if (dir_in_way(opt->repo->index, path,
-			       !opt->call_depth && !S_ISGITLINK(a->mode),
+			       !opt->priv->call_depth && !S_ISGITLINK(a->mode),
 			       0)) {
 			char *new_path = unique_path(opt, path, add_branch);
 			clean_merge = 0;
@@ -3341,7 +3364,7 @@
 			       conf, path, other_branch, path, new_path);
 			if (update_file(opt, 0, contents, new_path))
 				clean_merge = -1;
-			else if (opt->call_depth)
+			else if (opt->priv->call_depth)
 				remove_file_from_index(opt->repo->index, path);
 			free(new_path);
 		} else {
@@ -3381,37 +3404,32 @@
 	return clean_merge;
 }
 
-int merge_trees(struct merge_options *opt,
-		struct tree *head,
-		struct tree *merge,
-		struct tree *common,
-		struct tree **result)
+static int merge_trees_internal(struct merge_options *opt,
+				struct tree *head,
+				struct tree *merge,
+				struct tree *merge_base,
+				struct tree **result)
 {
 	struct index_state *istate = opt->repo->index;
 	int code, clean;
-	struct strbuf sb = STRBUF_INIT;
-
-	if (!opt->call_depth && repo_index_has_changes(opt->repo, head, &sb)) {
-		err(opt, _("Your local changes to the following files would be overwritten by merge:\n  %s"),
-		    sb.buf);
-		return -1;
-	}
 
 	if (opt->subtree_shift) {
-		merge = shift_tree_object(opt->repo, head, merge, opt->subtree_shift);
-		common = shift_tree_object(opt->repo, head, common, opt->subtree_shift);
+		merge = shift_tree_object(opt->repo, head, merge,
+					  opt->subtree_shift);
+		merge_base = shift_tree_object(opt->repo, head, merge_base,
+					       opt->subtree_shift);
 	}
 
-	if (oid_eq(&common->object.oid, &merge->object.oid)) {
+	if (oid_eq(&merge_base->object.oid, &merge->object.oid)) {
 		output(opt, 0, _("Already up to date!"));
 		*result = head;
 		return 1;
 	}
 
-	code = unpack_trees_start(opt, common, head, merge);
+	code = unpack_trees_start(opt, merge_base, head, merge);
 
 	if (code != 0) {
-		if (show(opt, 4) || opt->call_depth)
+		if (show(opt, 4) || opt->priv->call_depth)
 			err(opt, _("merging of trees %s and %s failed"),
 			    oid_to_hex(&head->object.oid),
 			    oid_to_hex(&merge->object.oid));
@@ -3430,12 +3448,13 @@
 		 * opposed to declaring a local hashmap is for convenience
 		 * so that we don't have to pass it around.
 		 */
-		hashmap_init(&opt->current_file_dir_set, path_hashmap_cmp, NULL, 512);
+		hashmap_init(&opt->priv->current_file_dir_set, path_hashmap_cmp,
+			     NULL, 512);
 		get_files_dirs(opt, head);
 		get_files_dirs(opt, merge);
 
 		entries = get_unmerged(opt->repo->index);
-		clean = detect_and_process_renames(opt, common, head, merge,
+		clean = detect_and_process_renames(opt, merge_base, head, merge,
 						   entries, &re_info);
 		record_df_conflict_files(opt, entries);
 		if (clean < 0)
@@ -3466,7 +3485,8 @@
 		string_list_clear(entries, 1);
 		free(entries);
 
-		hashmap_free(&opt->current_file_dir_set, 1);
+		hashmap_free_entries(&opt->priv->current_file_dir_set,
+					struct path_hashmap_entry, e);
 
 		if (clean < 0) {
 			unpack_trees_finish(opt);
@@ -3478,7 +3498,8 @@
 
 	unpack_trees_finish(opt);
 
-	if (opt->call_depth && !(*result = write_tree_from_memory(opt)))
+	if (opt->priv->call_depth &&
+	    !(*result = write_in_core_index_as_tree(opt->repo)))
 		return -1;
 
 	return clean;
@@ -3499,16 +3520,18 @@
  * Merge the commits h1 and h2, return the resulting virtual
  * commit object and a flag indicating the cleanness of the merge.
  */
-int merge_recursive(struct merge_options *opt,
-		    struct commit *h1,
-		    struct commit *h2,
-		    struct commit_list *ca,
-		    struct commit **result)
+static int merge_recursive_internal(struct merge_options *opt,
+				    struct commit *h1,
+				    struct commit *h2,
+				    struct commit_list *merge_bases,
+				    struct commit **result)
 {
 	struct commit_list *iter;
-	struct commit *merged_common_ancestors;
-	struct tree *mrtree;
+	struct commit *merged_merge_bases;
+	struct tree *result_tree;
 	int clean;
+	const char *ancestor_name;
+	struct strbuf merge_base_abbrev = STRBUF_INIT;
 
 	if (show(opt, 4)) {
 		output(opt, 4, _("Merging:"));
@@ -3516,32 +3539,43 @@
 		output_commit_title(opt, h2);
 	}
 
-	if (!ca) {
-		ca = get_merge_bases(h1, h2);
-		ca = reverse_commit_list(ca);
+	if (!merge_bases) {
+		merge_bases = get_merge_bases(h1, h2);
+		merge_bases = reverse_commit_list(merge_bases);
 	}
 
 	if (show(opt, 5)) {
-		unsigned cnt = commit_list_count(ca);
+		unsigned cnt = commit_list_count(merge_bases);
 
 		output(opt, 5, Q_("found %u common ancestor:",
 				"found %u common ancestors:", cnt), cnt);
-		for (iter = ca; iter; iter = iter->next)
+		for (iter = merge_bases; iter; iter = iter->next)
 			output_commit_title(opt, iter->item);
 	}
 
-	merged_common_ancestors = pop_commit(&ca);
-	if (merged_common_ancestors == NULL) {
+	merged_merge_bases = pop_commit(&merge_bases);
+	if (merged_merge_bases == NULL) {
 		/* if there is no common ancestor, use an empty tree */
 		struct tree *tree;
 
 		tree = lookup_tree(opt->repo, opt->repo->hash_algo->empty_tree);
-		merged_common_ancestors = make_virtual_commit(opt->repo, tree, "ancestor");
+		merged_merge_bases = make_virtual_commit(opt->repo, tree,
+							 "ancestor");
+		ancestor_name = "empty tree";
+	} else if (opt->ancestor && !opt->priv->call_depth) {
+		ancestor_name = opt->ancestor;
+	} else if (merge_bases) {
+		ancestor_name = "merged common ancestors";
+	} else {
+		strbuf_add_unique_abbrev(&merge_base_abbrev,
+					 &merged_merge_bases->object.oid,
+					 DEFAULT_ABBREV);
+		ancestor_name = merge_base_abbrev.buf;
 	}
 
-	for (iter = ca; iter; iter = iter->next) {
+	for (iter = merge_bases; iter; iter = iter->next) {
 		const char *saved_b1, *saved_b2;
-		opt->call_depth++;
+		opt->priv->call_depth++;
 		/*
 		 * When the merge fails, the result contains files
 		 * with conflict markers. The cleanness flag is
@@ -3555,45 +3589,134 @@
 		saved_b2 = opt->branch2;
 		opt->branch1 = "Temporary merge branch 1";
 		opt->branch2 = "Temporary merge branch 2";
-		if (merge_recursive(opt, merged_common_ancestors, iter->item,
-				    NULL, &merged_common_ancestors) < 0)
+		if (merge_recursive_internal(opt, merged_merge_bases, iter->item,
+					     NULL, &merged_merge_bases) < 0)
 			return -1;
 		opt->branch1 = saved_b1;
 		opt->branch2 = saved_b2;
-		opt->call_depth--;
+		opt->priv->call_depth--;
 
-		if (!merged_common_ancestors)
+		if (!merged_merge_bases)
 			return err(opt, _("merge returned no commit"));
 	}
 
 	discard_index(opt->repo->index);
-	if (!opt->call_depth)
+	if (!opt->priv->call_depth)
 		repo_read_index(opt->repo);
 
-	opt->ancestor = "merged common ancestors";
-	clean = merge_trees(opt, get_commit_tree(h1), get_commit_tree(h2),
-			    get_commit_tree(merged_common_ancestors),
-			    &mrtree);
+	opt->ancestor = ancestor_name;
+	clean = merge_trees_internal(opt,
+				     repo_get_commit_tree(opt->repo, h1),
+				     repo_get_commit_tree(opt->repo, h2),
+				     repo_get_commit_tree(opt->repo,
+							  merged_merge_bases),
+				     &result_tree);
+	strbuf_release(&merge_base_abbrev);
+	opt->ancestor = NULL;  /* avoid accidental re-use of opt->ancestor */
 	if (clean < 0) {
 		flush_output(opt);
 		return clean;
 	}
 
-	if (opt->call_depth) {
-		*result = make_virtual_commit(opt->repo, mrtree, "merged tree");
+	if (opt->priv->call_depth) {
+		*result = make_virtual_commit(opt->repo, result_tree,
+					      "merged tree");
 		commit_list_insert(h1, &(*result)->parents);
 		commit_list_insert(h2, &(*result)->parents->next);
 	}
-	flush_output(opt);
-	if (!opt->call_depth && opt->buffer_output < 2)
-		strbuf_release(&opt->obuf);
-	if (show(opt, 2))
-		diff_warn_rename_limit("merge.renamelimit",
-				       opt->needed_rename_limit, 0);
 	return clean;
 }
 
-static struct commit *get_ref(struct repository *repo, const struct object_id *oid,
+static int merge_start(struct merge_options *opt, struct tree *head)
+{
+	struct strbuf sb = STRBUF_INIT;
+
+	/* Sanity checks on opt */
+	assert(opt->repo);
+
+	assert(opt->branch1 && opt->branch2);
+
+	assert(opt->detect_renames >= -1 &&
+	       opt->detect_renames <= DIFF_DETECT_COPY);
+	assert(opt->detect_directory_renames >= MERGE_DIRECTORY_RENAMES_NONE &&
+	       opt->detect_directory_renames <= MERGE_DIRECTORY_RENAMES_TRUE);
+	assert(opt->rename_limit >= -1);
+	assert(opt->rename_score >= 0 && opt->rename_score <= MAX_SCORE);
+	assert(opt->show_rename_progress >= 0 && opt->show_rename_progress <= 1);
+
+	assert(opt->xdl_opts >= 0);
+	assert(opt->recursive_variant >= MERGE_VARIANT_NORMAL &&
+	       opt->recursive_variant <= MERGE_VARIANT_THEIRS);
+
+	assert(opt->verbosity >= 0 && opt->verbosity <= 5);
+	assert(opt->buffer_output <= 2);
+	assert(opt->obuf.len == 0);
+
+	assert(opt->priv == NULL);
+
+	/* Sanity check on repo state; index must match head */
+	if (repo_index_has_changes(opt->repo, head, &sb)) {
+		err(opt, _("Your local changes to the following files would be overwritten by merge:\n  %s"),
+		    sb.buf);
+		strbuf_release(&sb);
+		return -1;
+	}
+
+	opt->priv = xcalloc(1, sizeof(*opt->priv));
+	string_list_init(&opt->priv->df_conflict_file_set, 1);
+	return 0;
+}
+
+static void merge_finalize(struct merge_options *opt)
+{
+	flush_output(opt);
+	if (!opt->priv->call_depth && opt->buffer_output < 2)
+		strbuf_release(&opt->obuf);
+	if (show(opt, 2))
+		diff_warn_rename_limit("merge.renamelimit",
+				       opt->priv->needed_rename_limit, 0);
+	FREE_AND_NULL(opt->priv);
+}
+
+int merge_trees(struct merge_options *opt,
+		struct tree *head,
+		struct tree *merge,
+		struct tree *merge_base)
+{
+	int clean;
+	struct tree *ignored;
+
+	assert(opt->ancestor != NULL);
+
+	if (merge_start(opt, head))
+		return -1;
+	clean = merge_trees_internal(opt, head, merge, merge_base, &ignored);
+	merge_finalize(opt);
+
+	return clean;
+}
+
+int merge_recursive(struct merge_options *opt,
+		    struct commit *h1,
+		    struct commit *h2,
+		    struct commit_list *merge_bases,
+		    struct commit **result)
+{
+	int clean;
+
+	assert(opt->ancestor == NULL ||
+	       !strcmp(opt->ancestor, "constructed merge base"));
+
+	if (merge_start(opt, repo_get_commit_tree(opt->repo, h1)))
+		return -1;
+	clean = merge_recursive_internal(opt, h1, h2, merge_bases, result);
+	merge_finalize(opt);
+
+	return clean;
+}
+
+static struct commit *get_ref(struct repository *repo,
+			      const struct object_id *oid,
 			      const char *name)
 {
 	struct object *object;
@@ -3614,8 +3737,8 @@
 int merge_recursive_generic(struct merge_options *opt,
 			    const struct object_id *head,
 			    const struct object_id *merge,
-			    int num_base_list,
-			    const struct object_id **base_list,
+			    int num_merge_bases,
+			    const struct object_id **merge_bases,
 			    struct commit **result)
 {
 	int clean;
@@ -3624,15 +3747,18 @@
 	struct commit *next_commit = get_ref(opt->repo, merge, opt->branch2);
 	struct commit_list *ca = NULL;
 
-	if (base_list) {
+	if (merge_bases) {
 		int i;
-		for (i = 0; i < num_base_list; ++i) {
+		for (i = 0; i < num_merge_bases; ++i) {
 			struct commit *base;
-			if (!(base = get_ref(opt->repo, base_list[i], oid_to_hex(base_list[i]))))
+			if (!(base = get_ref(opt->repo, merge_bases[i],
+					     oid_to_hex(merge_bases[i]))))
 				return err(opt, _("Could not parse object '%s'"),
-					   oid_to_hex(base_list[i]));
+					   oid_to_hex(merge_bases[i]));
 			commit_list_insert(base, &ca);
 		}
+		if (num_merge_bases == 1)
+			opt->ancestor = "constructed merge base";
 	}
 
 	repo_hold_locked_index(opt->repo, &lock, LOCK_DIE_ON_ERROR);
@@ -3654,22 +3780,25 @@
 {
 	char *value = NULL;
 	git_config_get_int("merge.verbosity", &opt->verbosity);
-	git_config_get_int("diff.renamelimit", &opt->diff_rename_limit);
-	git_config_get_int("merge.renamelimit", &opt->merge_rename_limit);
+	git_config_get_int("diff.renamelimit", &opt->rename_limit);
+	git_config_get_int("merge.renamelimit", &opt->rename_limit);
 	if (!git_config_get_string("diff.renames", &value)) {
-		opt->diff_detect_rename = git_config_rename("diff.renames", value);
+		opt->detect_renames = git_config_rename("diff.renames", value);
 		free(value);
 	}
 	if (!git_config_get_string("merge.renames", &value)) {
-		opt->merge_detect_rename = git_config_rename("merge.renames", value);
+		opt->detect_renames = git_config_rename("merge.renames", value);
 		free(value);
 	}
 	if (!git_config_get_string("merge.directoryrenames", &value)) {
 		int boolval = git_parse_maybe_bool(value);
 		if (0 <= boolval) {
-			opt->detect_directory_renames = boolval ? 2 : 0;
+			opt->detect_directory_renames = boolval ?
+				MERGE_DIRECTORY_RENAMES_TRUE :
+				MERGE_DIRECTORY_RENAMES_NONE;
 		} else if (!strcasecmp(value, "conflict")) {
-			opt->detect_directory_renames = 1;
+			opt->detect_directory_renames =
+				MERGE_DIRECTORY_RENAMES_CONFLICT;
 		} /* avoid erroring on values from future versions of git */
 		free(value);
 	}
@@ -3681,23 +3810,25 @@
 {
 	const char *merge_verbosity;
 	memset(opt, 0, sizeof(struct merge_options));
+
 	opt->repo = repo;
+
+	opt->detect_renames = -1;
+	opt->detect_directory_renames = MERGE_DIRECTORY_RENAMES_CONFLICT;
+	opt->rename_limit = -1;
+
 	opt->verbosity = 2;
 	opt->buffer_output = 1;
-	opt->diff_rename_limit = -1;
-	opt->merge_rename_limit = -1;
+	strbuf_init(&opt->obuf, 0);
+
 	opt->renormalize = 0;
-	opt->diff_detect_rename = -1;
-	opt->merge_detect_rename = -1;
-	opt->detect_directory_renames = 1;
+
 	merge_recursive_config(opt);
 	merge_verbosity = getenv("GIT_MERGE_VERBOSITY");
 	if (merge_verbosity)
 		opt->verbosity = strtol(merge_verbosity, NULL, 10);
 	if (opt->verbosity >= 5)
 		opt->buffer_output = 0;
-	strbuf_init(&opt->obuf, 0);
-	string_list_init(&opt->df_conflict_file_set, 1);
 }
 
 int parse_merge_opt(struct merge_options *opt, const char *s)
@@ -3707,9 +3838,9 @@
 	if (!s || !*s)
 		return -1;
 	if (!strcmp(s, "ours"))
-		opt->recursive_variant = MERGE_RECURSIVE_OURS;
+		opt->recursive_variant = MERGE_VARIANT_OURS;
 	else if (!strcmp(s, "theirs"))
-		opt->recursive_variant = MERGE_RECURSIVE_THEIRS;
+		opt->recursive_variant = MERGE_VARIANT_THEIRS;
 	else if (!strcmp(s, "subtree"))
 		opt->subtree_shift = "";
 	else if (skip_prefix(s, "subtree=", &arg))
@@ -3740,16 +3871,16 @@
 	else if (!strcmp(s, "no-renormalize"))
 		opt->renormalize = 0;
 	else if (!strcmp(s, "no-renames"))
-		opt->merge_detect_rename = 0;
+		opt->detect_renames = 0;
 	else if (!strcmp(s, "find-renames")) {
-		opt->merge_detect_rename = 1;
+		opt->detect_renames = 1;
 		opt->rename_score = 0;
 	}
 	else if (skip_prefix(s, "find-renames=", &arg) ||
 		 skip_prefix(s, "rename-threshold=", &arg)) {
 		if ((opt->rename_score = parse_rename_score(&arg)) == -1 || *arg != 0)
 			return -1;
-		opt->merge_detect_rename = 1;
+		opt->detect_renames = 1;
 	}
 	/*
 	 * Please update $__git_merge_strategy_options in
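
The merge-recursive.c hunks above also carry the hashmap API conversion that runs through this patch: callers now address entries through the embedded struct hashmap_entry member instead of assuming it is the first field, iterate with hashmap_for_each_entry(), and tear maps down with hashmap_free_entries(). A minimal, hypothetical sketch of the new pattern (struct example and its key field are invented for illustration, not taken from the patch):

    struct example {
            struct hashmap_entry ent;  /* embedded entry; need not be first */
            char *key;
    };

    static void example_clear(struct hashmap *map)
    {
            struct hashmap_iter iter;
            struct example *e;

            /* was: hashmap_iter_init(map, &iter);
             *      while ((e = hashmap_iter_next(&iter))) ... */
            hashmap_for_each_entry(map, &iter, e, ent /* member name */)
                    free(e->key);

            /* was: hashmap_free(map, 1); */
            hashmap_free_entries(map, struct example, ent);
    }
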
diff --git a/merge-recursive.h b/merge-recursive.h
index c2b7bb6..978847e 100644
--- a/merge-recursive.h
+++ b/merge-recursive.h
@@ -1,104 +1,124 @@
 #ifndef MERGE_RECURSIVE_H
 #define MERGE_RECURSIVE_H
 
-#include "string-list.h"
-#include "unpack-trees.h"
+#include "strbuf.h"
 
 struct commit;
-
+struct commit_list;
+struct object_id;
 struct repository;
+struct tree;
 
+struct merge_options_internal;
 struct merge_options {
+	struct repository *repo;
+
+	/* ref names used in console messages and conflict markers */
 	const char *ancestor;
 	const char *branch1;
 	const char *branch2;
+
+	/* rename related options */
+	int detect_renames;
 	enum {
-		MERGE_RECURSIVE_NORMAL = 0,
-		MERGE_RECURSIVE_OURS,
-		MERGE_RECURSIVE_THEIRS
-	} recursive_variant;
-	const char *subtree_shift;
-	unsigned buffer_output; /* 1: output at end, 2: keep buffered */
-	unsigned renormalize : 1;
-	long xdl_opts;
-	int verbosity;
-	int detect_directory_renames;
-	int diff_detect_rename;
-	int merge_detect_rename;
-	int diff_rename_limit;
-	int merge_rename_limit;
+		MERGE_DIRECTORY_RENAMES_NONE = 0,
+		MERGE_DIRECTORY_RENAMES_CONFLICT = 1,
+		MERGE_DIRECTORY_RENAMES_TRUE = 2
+	} detect_directory_renames;
+	int rename_limit;
 	int rename_score;
-	int needed_rename_limit;
 	int show_rename_progress;
-	int call_depth;
-	struct strbuf obuf;
-	struct hashmap current_file_dir_set;
-	struct string_list df_conflict_file_set;
-	struct unpack_trees_options unpack_opts;
-	struct index_state orig_index;
-	struct repository *repo;
+
+	/* xdiff-related options (patience, ignore whitespace, ours/theirs) */
+	long xdl_opts;
+	enum {
+		MERGE_VARIANT_NORMAL = 0,
+		MERGE_VARIANT_OURS,
+		MERGE_VARIANT_THEIRS
+	} recursive_variant;
+
+	/* console output related options */
+	int verbosity;
+	unsigned buffer_output; /* 1: output at end, 2: keep buffered */
+	struct strbuf obuf;     /* output buffer; if buffer_output == 2, caller
+				 * must handle and call strbuf_release */
+
+	/* miscellaneous control options */
+	const char *subtree_shift;
+	unsigned renormalize : 1;
+
+	/* internal fields used by the implementation */
+	struct merge_options_internal *priv;
 };
 
+void init_merge_options(struct merge_options *opt, struct repository *repo);
+
+/* parse the option in s and update the relevant field of opt */
+int parse_merge_opt(struct merge_options *opt, const char *s);
+
 /*
- * For dir_rename_entry, directory names are stored as a full path from the
- * toplevel of the repository and do not include a trailing '/'.  Also:
- *
- *   dir:                original name of directory being renamed
- *   non_unique_new_dir: if true, could not determine new_dir
- *   new_dir:            final name of directory being renamed
- *   possible_new_dirs:  temporary used to help determine new_dir; see comments
- *                       in get_directory_renames() for details
+ * RETURN VALUES: All the merge_* functions below return a value as follows:
+ *   > 0     Merge was clean
+ *   = 0     Merge had conflicts
+ *   < 0     Merge hit an unexpected and unrecoverable problem (e.g. disk
+ *             full) and aborted merge part-way through.
  */
-struct dir_rename_entry {
-	struct hashmap_entry ent; /* must be the first member! */
-	char *dir;
-	unsigned non_unique_new_dir:1;
-	struct strbuf new_dir;
-	struct string_list possible_new_dirs;
-};
 
-struct collision_entry {
-	struct hashmap_entry ent; /* must be the first member! */
-	char *target_file;
-	struct string_list source_files;
-	unsigned reported_already:1;
-};
-
-static inline int merge_detect_rename(struct merge_options *o)
-{
-	return o->merge_detect_rename >= 0 ? o->merge_detect_rename :
-		o->diff_detect_rename >= 0 ? o->diff_detect_rename : 1;
-}
-
-/* merge_trees() but with recursive ancestor consolidation */
-int merge_recursive(struct merge_options *o,
-		    struct commit *h1,
-		    struct commit *h2,
-		    struct commit_list *ancestors,
-		    struct commit **result);
-
-/* rename-detecting three-way merge, no recursion */
-int merge_trees(struct merge_options *o,
+/*
+ * rename-detecting three-way merge, no recursion.
+ *
+ * Outputs:
+ *   - See RETURN VALUES above
+ *   - No commit is created
+ *   - opt->repo->index has the new index
+ *   - $GIT_INDEX_FILE is not updated
+ *   - The working tree is updated with results of the merge
+ */
+int merge_trees(struct merge_options *opt,
 		struct tree *head,
 		struct tree *merge,
-		struct tree *common,
-		struct tree **result);
+		struct tree *merge_base);
 
 /*
- * "git-merge-recursive" can be fed trees; wrap them into
- * virtual commits and call merge_recursive() proper.
+ * merge_recursive is like merge_trees() but with recursive ancestor
+ * consolidation and, if the merge is clean, creation of a commit.
+ *
+ * NOTE: empirically, about a decade ago it was determined that with more
+ *       than two merge bases, optimal behavior was found when the
+ *       merge_bases were passed in the order of oldest commit to newest
+ *       commit.  Also, merge_bases will be consumed (emptied) so make a
+ *       copy if you need it.
+ *
+ * Outputs:
+ *   - See RETURN VALUES above
+ *   - If merge is clean, a commit is created and its address written to *result
+ *   - opt->repo->index has the new index
+ *   - $GIT_INDEX_FILE is not updated
+ *   - The working tree is updated with results of the merge
  */
-int merge_recursive_generic(struct merge_options *o,
+int merge_recursive(struct merge_options *opt,
+		    struct commit *h1,
+		    struct commit *h2,
+		    struct commit_list *merge_bases,
+		    struct commit **result);
+
+/*
+ * merge_recursive_generic can operate on trees instead of commits, by
+ * wrapping the trees into virtual commits, and calling merge_recursive().
+ * It also writes out the in-memory index to disk if the merge is successful.
+ *
+ * Outputs:
+ *   - See RETURN VALUES above
+ *   - If merge is clean, a commit is created and its address written to *result
+ *   - opt->repo->index has the new index
+ *   - $GIT_INDEX_FILE is updated
+ *   - The working tree is updated with results of the merge
+ */
+int merge_recursive_generic(struct merge_options *opt,
 			    const struct object_id *head,
 			    const struct object_id *merge,
-			    int num_ca,
-			    const struct object_id **ca,
+			    int num_merge_bases,
+			    const struct object_id **merge_bases,
 			    struct commit **result);
 
-void init_merge_options(struct merge_options *o,
-			struct repository *repo);
-struct tree *write_tree_from_memory(struct merge_options *o);
-
-int parse_merge_opt(struct merge_options *out, const char *s);
-
 #endif
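
For orientation, a hypothetical caller of the reworked merge_trees() might look like the sketch below. It is inferred only from the header above and from the sequencer.c changes later in this patch; example_merge() and its tree parameters are invented names, and the branch labels are placeholders:

    static int example_merge(struct repository *r,
                             struct tree *head_tree,
                             struct tree *merge_tree,
                             struct tree *merge_base_tree)
    {
            struct merge_options opt;
            int clean;

            init_merge_options(&opt, r);
            opt.ancestor = "merge base";   /* merge_trees() now insists this is set */
            opt.branch1 = "HEAD";
            opt.branch2 = "topic";

            clean = merge_trees(&opt, head_tree, merge_tree, merge_base_tree);
            if (clean < 0)
                    return error(_("merge hit an unrecoverable problem"));
            if (!clean)
                    warning(_("merge produced conflicts"));
            /* no result-tree argument anymore; the merged result is in r->index */
            return clean;
    }
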
diff --git a/midx.c b/midx.c
index d649644..f29afc0 100644
--- a/midx.c
+++ b/midx.c
@@ -19,8 +19,7 @@
 #define MIDX_BYTE_NUM_PACKS 8
 #define MIDX_HASH_VERSION 1
 #define MIDX_HEADER_SIZE 12
-#define MIDX_HASH_LEN 20
-#define MIDX_MIN_SIZE (MIDX_HEADER_SIZE + MIDX_HASH_LEN)
+#define MIDX_MIN_SIZE (MIDX_HEADER_SIZE + the_hash_algo->rawsz)
 
 #define MIDX_MAX_CHUNKS 5
 #define MIDX_CHUNK_ALIGNMENT 4
@@ -93,7 +92,7 @@
 	hash_version = m->data[MIDX_BYTE_HASH_VERSION];
 	if (hash_version != MIDX_HASH_VERSION)
 		die(_("hash version %u does not match"), hash_version);
-	m->hash_len = MIDX_HASH_LEN;
+	m->hash_len = the_hash_algo->rawsz;
 
 	m->num_chunks = m->data[MIDX_BYTE_NUM_CHUNKS];
 
@@ -234,7 +233,7 @@
 int bsearch_midx(const struct object_id *oid, struct multi_pack_index *m, uint32_t *result)
 {
 	return bsearch_hash(oid->hash, m->chunk_oid_fanout, m->chunk_oid_lookup,
-			    MIDX_HASH_LEN, result);
+			    the_hash_algo->rawsz, result);
 }
 
 struct object_id *nth_midxed_object_oid(struct object_id *oid,
@@ -928,7 +927,7 @@
 
 	cur_chunk++;
 	chunk_ids[cur_chunk] = MIDX_CHUNKID_OBJECTOFFSETS;
-	chunk_offsets[cur_chunk] = chunk_offsets[cur_chunk - 1] + nr_entries * MIDX_HASH_LEN;
+	chunk_offsets[cur_chunk] = chunk_offsets[cur_chunk - 1] + nr_entries * the_hash_algo->rawsz;
 
 	cur_chunk++;
 	chunk_offsets[cur_chunk] = chunk_offsets[cur_chunk - 1] + nr_entries * MIDX_CHUNK_OFFSET_WIDTH;
@@ -976,7 +975,7 @@
 				break;
 
 			case MIDX_CHUNKID_OIDLOOKUP:
-				written += write_midx_oid_lookup(f, MIDX_HASH_LEN, entries, nr_entries);
+				written += write_midx_oid_lookup(f, the_hash_algo->rawsz, entries, nr_entries);
 				break;
 
 			case MIDX_CHUNKID_OBJECTOFFSETS:
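
The midx.c changes drop the hard-coded 20-byte MIDX_HASH_LEN in favor of the_hash_algo->rawsz, so the multi-pack-index code no longer assumes SHA-1-sized hashes. A hedged illustration (not from the patch) of the buffer-sizing idiom this implies, using GIT_MAX_RAWSZ so the same code works for any supported algorithm:

    static void copy_and_print_hash(const unsigned char *hash)
    {
            unsigned char copy[GIT_MAX_RAWSZ];   /* large enough for SHA-1 or SHA-256 */

            memcpy(copy, hash, the_hash_algo->rawsz);
            printf("%s\n", hash_to_hex(copy));   /* hex length follows the active algorithm */
    }
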
diff --git a/name-hash.c b/name-hash.c
index 6959086..ceb1d7b 100644
--- a/name-hash.c
+++ b/name-hash.c
@@ -17,14 +17,16 @@
 };
 
 static int dir_entry_cmp(const void *unused_cmp_data,
-			 const void *entry,
-			 const void *entry_or_key,
+			 const struct hashmap_entry *eptr,
+			 const struct hashmap_entry *entry_or_key,
 			 const void *keydata)
 {
-	const struct dir_entry *e1 = entry;
-	const struct dir_entry *e2 = entry_or_key;
+	const struct dir_entry *e1, *e2;
 	const char *name = keydata;
 
+	e1 = container_of(eptr, const struct dir_entry, ent);
+	e2 = container_of(entry_or_key, const struct dir_entry, ent);
+
 	return e1->namelen != e2->namelen || strncasecmp(e1->name,
 			name ? name : e2->name, e1->namelen);
 }
@@ -33,9 +35,9 @@
 		const char *name, unsigned int namelen, unsigned int hash)
 {
 	struct dir_entry key;
-	hashmap_entry_init(&key, hash);
+	hashmap_entry_init(&key.ent, hash);
 	key.namelen = namelen;
-	return hashmap_get(&istate->dir_hash, &key, name);
+	return hashmap_get_entry(&istate->dir_hash, &key, ent, name);
 }
 
 static struct dir_entry *find_dir_entry(struct index_state *istate,
@@ -68,9 +70,9 @@
 	if (!dir) {
 		/* not found, create it and add to hash table */
 		FLEX_ALLOC_MEM(dir, name, ce->name, namelen);
-		hashmap_entry_init(dir, memihash(ce->name, namelen));
+		hashmap_entry_init(&dir->ent, memihash(ce->name, namelen));
 		dir->namelen = namelen;
-		hashmap_add(&istate->dir_hash, dir);
+		hashmap_add(&istate->dir_hash, &dir->ent);
 
 		/* recursively add missing parent directories */
 		dir->parent = hash_dir_entry(istate, ce, namelen);
@@ -95,7 +97,7 @@
 	struct dir_entry *dir = hash_dir_entry(istate, ce, ce_namelen(ce));
 	while (dir && !(--dir->nr)) {
 		struct dir_entry *parent = dir->parent;
-		hashmap_remove(&istate->dir_hash, dir, NULL);
+		hashmap_remove(&istate->dir_hash, &dir->ent, NULL);
 		free(dir);
 		dir = parent;
 	}
@@ -106,20 +108,23 @@
 	if (ce->ce_flags & CE_HASHED)
 		return;
 	ce->ce_flags |= CE_HASHED;
-	hashmap_entry_init(ce, memihash(ce->name, ce_namelen(ce)));
-	hashmap_add(&istate->name_hash, ce);
+	hashmap_entry_init(&ce->ent, memihash(ce->name, ce_namelen(ce)));
+	hashmap_add(&istate->name_hash, &ce->ent);
 
 	if (ignore_case)
 		add_dir_entry(istate, ce);
 }
 
 static int cache_entry_cmp(const void *unused_cmp_data,
-			   const void *entry,
-			   const void *entry_or_key,
+			   const struct hashmap_entry *eptr,
+			   const struct hashmap_entry *entry_or_key,
 			   const void *remove)
 {
-	const struct cache_entry *ce1 = entry;
-	const struct cache_entry *ce2 = entry_or_key;
+	const struct cache_entry *ce1, *ce2;
+
+	ce1 = container_of(eptr, const struct cache_entry, ent);
+	ce2 = container_of(entry_or_key, const struct cache_entry, ent);
+
 	/*
 	 * For remove_name_hash, find the exact entry (pointer equality); for
 	 * index_file_exists, find all entries with matching hash code and
@@ -280,10 +285,10 @@
 	dir = find_dir_entry__hash(istate, prefix->buf, prefix->len, hash);
 	if (!dir) {
 		FLEX_ALLOC_MEM(dir, name, prefix->buf, prefix->len);
-		hashmap_entry_init(dir, hash);
+		hashmap_entry_init(&dir->ent, hash);
 		dir->namelen = prefix->len;
 		dir->parent = parent;
-		hashmap_add(&istate->dir_hash, dir);
+		hashmap_add(&istate->dir_hash, &dir->ent);
 
 		if (parent) {
 			unlock_dir_mutex(lock_nr);
@@ -472,8 +477,8 @@
 	for (k = 0; k < d->istate->cache_nr; k++) {
 		struct cache_entry *ce_k = d->istate->cache[k];
 		ce_k->ce_flags |= CE_HASHED;
-		hashmap_entry_init(ce_k, d->lazy_entries[k].hash_name);
-		hashmap_add(&d->istate->name_hash, ce_k);
+		hashmap_entry_init(&ce_k->ent, d->lazy_entries[k].hash_name);
+		hashmap_add(&d->istate->name_hash, &ce_k->ent);
 	}
 
 	return NULL;
@@ -625,7 +630,7 @@
 	if (!istate->name_hash_initialized || !(ce->ce_flags & CE_HASHED))
 		return;
 	ce->ce_flags &= ~CE_HASHED;
-	hashmap_remove(&istate->name_hash, ce, ce);
+	hashmap_remove(&istate->name_hash, &ce->ent, ce);
 
 	if (ignore_case)
 		remove_dir_entry(istate, ce);
@@ -702,15 +707,15 @@
 struct cache_entry *index_file_exists(struct index_state *istate, const char *name, int namelen, int icase)
 {
 	struct cache_entry *ce;
+	unsigned int hash = memihash(name, namelen);
 
 	lazy_init_name_hash(istate);
 
-	ce = hashmap_get_from_hash(&istate->name_hash,
-				   memihash(name, namelen), NULL);
-	while (ce) {
+	ce = hashmap_get_entry_from_hash(&istate->name_hash, hash, NULL,
+					 struct cache_entry, ent);
+	hashmap_for_each_entry_from(&istate->name_hash, ce, ent) {
 		if (same_name(ce, name, namelen, icase))
 			return ce;
-		ce = hashmap_get_next(&istate->name_hash, ce);
 	}
 	return NULL;
 }
@@ -721,6 +726,6 @@
 		return;
 	istate->name_hash_initialized = 0;
 
-	hashmap_free(&istate->name_hash, 0);
-	hashmap_free(&istate->dir_hash, 1);
+	hashmap_free(&istate->name_hash);
+	hashmap_free_entries(&istate->dir_hash, struct dir_entry, ent);
 }
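
The name-hash.c hunks show the other side of the hashmap conversion: comparison callbacks now take const struct hashmap_entry * arguments and recover their containing structures with container_of(). A hypothetical sketch of that callback shape and a matching lookup (struct sample, its name field, and sample_find() are invented for illustration):

    struct sample {
            struct hashmap_entry ent;
            const char *name;
    };

    static int sample_cmp(const void *unused_cmp_data,
                          const struct hashmap_entry *eptr,
                          const struct hashmap_entry *entry_or_key,
                          const void *keydata)
    {
            const struct sample *a = container_of(eptr, const struct sample, ent);
            const struct sample *b = container_of(entry_or_key, const struct sample, ent);

            /* non-zero means "not equal", as for every hashmap cmp function */
            return strcmp(a->name, keydata ? keydata : b->name);
    }

    static struct sample *sample_find(struct hashmap *map, const char *name)
    {
            struct sample key;

            hashmap_entry_init(&key.ent, strhash(name));
            return hashmap_get_entry(map, &key, ent, name);
    }
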
diff --git a/object.c b/object.c
index 07bdd5b..3b8b8c5 100644
--- a/object.c
+++ b/object.c
@@ -7,7 +7,6 @@
 #include "commit.h"
 #include "tag.h"
 #include "alloc.h"
-#include "object-store.h"
 #include "packfile.h"
 #include "commit-graph.h"
 
diff --git a/oidmap.c b/oidmap.c
index 6d6e840..423aa01 100644
--- a/oidmap.c
+++ b/oidmap.c
@@ -2,14 +2,18 @@
 #include "oidmap.h"
 
 static int oidmap_neq(const void *hashmap_cmp_fn_data,
-		      const void *entry, const void *entry_or_key,
+		      const struct hashmap_entry *e1,
+		      const struct hashmap_entry *e2,
 		      const void *keydata)
 {
-	const struct oidmap_entry *entry_ = entry;
+	const struct oidmap_entry *a, *b;
+
+	a = container_of(e1, const struct oidmap_entry, internal_entry);
+	b = container_of(e2, const struct oidmap_entry, internal_entry);
+
 	if (keydata)
-		return !oideq(&entry_->oid, (const struct object_id *) keydata);
-	return !oideq(&entry_->oid,
-		      &((const struct oidmap_entry *) entry_or_key)->oid);
+		return !oideq(&a->oid, (const struct object_id *) keydata);
+	return !oideq(&a->oid, &b->oid);
 }
 
 void oidmap_init(struct oidmap *map, size_t initial_size)
@@ -21,7 +25,9 @@
 {
 	if (!map)
 		return;
-	hashmap_free(&map->map, free_entries);
+
+	/* TODO: make oidmap itself not depend on struct layouts */
+	hashmap_free_(&map->map, free_entries ? 0 : -1);
 }
 
 void *oidmap_get(const struct oidmap *map, const struct object_id *key)
@@ -51,5 +57,5 @@
 		oidmap_init(map, 0);
 
 	hashmap_entry_init(&to_put->internal_entry, oidhash(&to_put->oid));
-	return hashmap_put(&map->map, to_put);
+	return hashmap_put(&map->map, &to_put->internal_entry);
 }
diff --git a/oidmap.h b/oidmap.h
index 7a93946..c66a83a 100644
--- a/oidmap.h
+++ b/oidmap.h
@@ -78,14 +78,16 @@
 
 static inline void *oidmap_iter_next(struct oidmap_iter *iter)
 {
-	return hashmap_iter_next(&iter->h_iter);
+	/* TODO: this API could be reworked to do compile-time type checks */
+	return (void *)hashmap_iter_next(&iter->h_iter);
 }
 
 static inline void *oidmap_iter_first(struct oidmap *map,
 				      struct oidmap_iter *iter)
 {
 	oidmap_iter_init(map, iter);
-	return oidmap_iter_next(iter);
+	/* TODO: this API could be reworked to do compile-time type checks */
+	return (void *)oidmap_iter_next(iter);
 }
 
 #endif
diff --git a/pack-write.c b/pack-write.c
index 29d17a9..f0017be 100644
--- a/pack-write.c
+++ b/pack-write.c
@@ -349,7 +349,7 @@
 			 struct pack_idx_entry **written_list,
 			 uint32_t nr_written,
 			 struct pack_idx_option *pack_idx_opts,
-			 unsigned char sha1[])
+			 unsigned char hash[])
 {
 	const char *idx_tmp_name;
 	int basename_len = name_buffer->len;
@@ -358,18 +358,18 @@
 		die_errno("unable to make temporary pack file readable");
 
 	idx_tmp_name = write_idx_file(NULL, written_list, nr_written,
-				      pack_idx_opts, sha1);
+				      pack_idx_opts, hash);
 	if (adjust_shared_perm(idx_tmp_name))
 		die_errno("unable to make temporary index file readable");
 
-	strbuf_addf(name_buffer, "%s.pack", sha1_to_hex(sha1));
+	strbuf_addf(name_buffer, "%s.pack", hash_to_hex(hash));
 
 	if (rename(pack_tmp_name, name_buffer->buf))
 		die_errno("unable to rename temporary pack file");
 
 	strbuf_setlen(name_buffer, basename_len);
 
-	strbuf_addf(name_buffer, "%s.idx", sha1_to_hex(sha1));
+	strbuf_addf(name_buffer, "%s.idx", hash_to_hex(hash));
 	if (rename(idx_tmp_name, name_buffer->buf))
 		die_errno("unable to rename temporary index file");
 
diff --git a/packfile.c b/packfile.c
index f3f962a..355066d 100644
--- a/packfile.c
+++ b/packfile.c
@@ -6,7 +6,6 @@
 #include "mergesort.h"
 #include "packfile.h"
 #include "delta.h"
-#include "list.h"
 #include "streaming.h"
 #include "sha1-lookup.h"
 #include "commit.h"
@@ -20,12 +19,12 @@
 #include "promisor-remote.h"
 
 char *odb_pack_name(struct strbuf *buf,
-		    const unsigned char *sha1,
+		    const unsigned char *hash,
 		    const char *ext)
 {
 	strbuf_reset(buf);
 	strbuf_addf(buf, "%s/pack/pack-%s.%s", get_object_directory(),
-		    sha1_to_hex(sha1), ext);
+		    hash_to_hex(hash), ext);
 	return buf->buf;
 }
 
@@ -1344,7 +1343,7 @@
 };
 
 struct delta_base_cache_entry {
-	struct hashmap hash;
+	struct hashmap_entry ent;
 	struct delta_base_cache_key key;
 	struct list_head lru;
 	void *data;
@@ -1364,7 +1363,7 @@
 static struct delta_base_cache_entry *
 get_delta_base_cache_entry(struct packed_git *p, off_t base_offset)
 {
-	struct hashmap_entry entry;
+	struct hashmap_entry entry, *e;
 	struct delta_base_cache_key key;
 
 	if (!delta_base_cache.cmpfn)
@@ -1373,7 +1372,8 @@
 	hashmap_entry_init(&entry, pack_entry_hash(p, base_offset));
 	key.p = p;
 	key.base_offset = base_offset;
-	return hashmap_get(&delta_base_cache, &entry, &key);
+	e = hashmap_get(&delta_base_cache, &entry, &key);
+	return e ? container_of(e, struct delta_base_cache_entry, ent) : NULL;
 }
 
 static int delta_base_cache_key_eq(const struct delta_base_cache_key *a,
@@ -1383,11 +1383,16 @@
 }
 
 static int delta_base_cache_hash_cmp(const void *unused_cmp_data,
-				     const void *va, const void *vb,
+				     const struct hashmap_entry *va,
+				     const struct hashmap_entry *vb,
 				     const void *vkey)
 {
-	const struct delta_base_cache_entry *a = va, *b = vb;
+	const struct delta_base_cache_entry *a, *b;
 	const struct delta_base_cache_key *key = vkey;
+
+	a = container_of(va, const struct delta_base_cache_entry, ent);
+	b = container_of(vb, const struct delta_base_cache_entry, ent);
+
 	if (key)
 		return !delta_base_cache_key_eq(&a->key, key);
 	else
@@ -1406,7 +1411,7 @@
  */
 static void detach_delta_base_cache_entry(struct delta_base_cache_entry *ent)
 {
-	hashmap_remove(&delta_base_cache, ent, &ent->key);
+	hashmap_remove(&delta_base_cache, &ent->ent, &ent->key);
 	list_del(&ent->lru);
 	delta_base_cached -= ent->size;
 	free(ent);
@@ -1470,8 +1475,8 @@
 
 	if (!delta_base_cache.cmpfn)
 		hashmap_init(&delta_base_cache, delta_base_cache_hash_cmp, NULL, 0);
-	hashmap_entry_init(ent, pack_entry_hash(p, base_offset));
-	hashmap_add(&delta_base_cache, ent);
+	hashmap_entry_init(&ent->ent, pack_entry_hash(p, base_offset));
+	hashmap_add(&delta_base_cache, &ent->ent);
 }
 
 int packed_object_info(struct repository *r, struct packed_git *p,
diff --git a/patch-ids.c b/patch-ids.c
index e8c150d..12aa6d4 100644
--- a/patch-ids.c
+++ b/patch-ids.c
@@ -36,14 +36,16 @@
  * any significance; only that it is non-zero matters.
  */
 static int patch_id_neq(const void *cmpfn_data,
-			const void *entry,
-			const void *entry_or_key,
+			const struct hashmap_entry *eptr,
+			const struct hashmap_entry *entry_or_key,
 			const void *unused_keydata)
 {
 	/* NEEDSWORK: const correctness? */
 	struct diff_options *opt = (void *)cmpfn_data;
-	struct patch_id *a = (void *)entry;
-	struct patch_id *b = (void *)entry_or_key;
+	struct patch_id *a, *b;
+
+	a = container_of(eptr, struct patch_id, ent);
+	b = container_of(entry_or_key, struct patch_id, ent);
 
 	if (is_null_oid(&a->patch_id) &&
 	    commit_patch_id(a->commit, opt, &a->patch_id, 0, 0))
@@ -69,7 +71,7 @@
 
 int free_patch_ids(struct patch_ids *ids)
 {
-	hashmap_free(&ids->patches, 1);
+	hashmap_free_entries(&ids->patches, struct patch_id, ent);
 	return 0;
 }
 
@@ -83,7 +85,7 @@
 	if (commit_patch_id(commit, &ids->diffopts, &header_only_patch_id, 1, 0))
 		return -1;
 
-	hashmap_entry_init(patch, oidhash(&header_only_patch_id));
+	hashmap_entry_init(&patch->ent, oidhash(&header_only_patch_id));
 	return 0;
 }
 
@@ -99,7 +101,7 @@
 	if (init_patch_id_entry(&patch, commit, ids))
 		return NULL;
 
-	return hashmap_get(&ids->patches, &patch, NULL);
+	return hashmap_get_entry(&ids->patches, &patch, ent, NULL);
 }
 
 struct patch_id *add_commit_patch_id(struct commit *commit,
@@ -116,6 +118,6 @@
 		return NULL;
 	}
 
-	hashmap_add(&ids->patches, key);
+	hashmap_add(&ids->patches, &key->ent);
 	return key;
 }
diff --git a/perl/Git/SVN.pm b/perl/Git/SVN.pm
index 76b2965..4b28b87 100644
--- a/perl/Git/SVN.pm
+++ b/perl/Git/SVN.pm
@@ -1491,6 +1491,10 @@
 
 sub check_author {
 	my ($author) = @_;
+	if (defined $author) {
+		$author =~ s/^\s+//g;
+		$author =~ s/\s+$//g;
+	}
 	if (!defined $author || length $author == 0) {
 		$author = '(no author)';
 	}
diff --git a/quote.c b/quote.c
index c8ba6b3..24a58ba 100644
--- a/quote.c
+++ b/quote.c
@@ -48,6 +48,12 @@
 	static const char ok_punct[] = "+,-./:=@_^";
 	const char *p;
 
+	/* Avoid losing a zero-length string by adding '' */
+	if (!*src) {
+		strbuf_addstr(dst, "''");
+		return;
+	}
+
 	for (p = src; *p; p++) {
 		if (!isalpha(*p) && !isdigit(*p) && !strchr(ok_punct, *p)) {
 			sq_quote_buf(dst, src);
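
The quote.c hunk is in sq_quote_buf_pretty(): a zero-length argument used to contribute nothing to the output and silently vanished from the quoted command line. A hedged usage sketch of the behavior after this change (the surrounding strbuf plumbing is only illustrative):

    struct strbuf cmd = STRBUF_INIT;

    sq_quote_buf_pretty(&cmd, "echo");
    strbuf_addch(&cmd, ' ');
    sq_quote_buf_pretty(&cmd, "");   /* previously added nothing; now adds '' */

    /* cmd.buf is now "echo ''" rather than "echo " */
    strbuf_release(&cmd);
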
diff --git a/range-diff.c b/range-diff.c
index ba1e9a4..7fed5a3 100644
--- a/range-diff.c
+++ b/range-diff.c
@@ -52,6 +52,7 @@
 
 	argv_array_pushl(&cp.args, "log", "--no-color", "-p", "--no-merges",
 			"--reverse", "--date-order", "--decorate=no",
+			"--no-prefix",
 			/*
 			 * Choose indicators that are not used anywhere
 			 * else in diffs, but still look reasonable
@@ -111,7 +112,7 @@
 			if (!util->diff_offset)
 				util->diff_offset = buf.len;
 			line[len - 1] = '\n';
-			len = parse_git_diff_header(&root, &linenr, 1, line,
+			len = parse_git_diff_header(&root, &linenr, 0, line,
 						    len, size, &patch);
 			if (len < 0)
 				die(_("could not parse git header '%.*s'"), (int)len, line);
@@ -217,8 +218,8 @@
 		util->i = i;
 		util->patch = a->items[i].string;
 		util->diff = util->patch + util->diff_offset;
-		hashmap_entry_init(util, strhash(util->diff));
-		hashmap_add(&map, util);
+		hashmap_entry_init(&util->e, strhash(util->diff));
+		hashmap_add(&map, &util->e);
 	}
 
 	/* Now try to find exact matches in b */
@@ -228,8 +229,8 @@
 		util->i = i;
 		util->patch = b->items[i].string;
 		util->diff = util->patch + util->diff_offset;
-		hashmap_entry_init(util, strhash(util->diff));
-		other = hashmap_remove(&map, util, NULL);
+		hashmap_entry_init(&util->e, strhash(util->diff));
+		other = hashmap_remove_entry(&map, util, e, NULL);
 		if (other) {
 			if (other->matching >= 0)
 				BUG("already assigned!");
@@ -239,7 +240,7 @@
 		}
 	}
 
-	hashmap_free(&map, 0);
+	hashmap_free(&map);
 }
 
 static void diffsize_consume(void *data, char *line, unsigned long len)
diff --git a/read-cache.c b/read-cache.c
index 2c14094..133f790 100644
--- a/read-cache.c
+++ b/read-cache.c
@@ -1276,7 +1276,7 @@
 	 */
 	if (istate->cache_nr > 0 &&
 		strcmp(ce->name, istate->cache[istate->cache_nr - 1]->name) > 0)
-		pos = -istate->cache_nr - 1;
+		pos = index_pos_to_insert_pos(istate->cache_nr);
 	else
 		pos = index_name_stage_pos(istate, ce->name, ce_namelen(ce), ce_stage(ce));
 
@@ -1915,7 +1915,7 @@
 	/*
 	 * Account for potential alignment differences.
 	 */
-	per_entry += align_padding_size(sizeof(struct cache_entry), -sizeof(struct ondisk_cache_entry));
+	per_entry += align_padding_size(per_entry, 0);
 	return ondisk_size + entries * per_entry;
 }
 
diff --git a/ref-filter.c b/ref-filter.c
index 220e9bd..6867e33 100644
--- a/ref-filter.c
+++ b/ref-filter.c
@@ -79,17 +79,20 @@
 } oi, oi_deref;
 
 struct ref_to_worktree_entry {
-	struct hashmap_entry ent; /* must be the first member! */
+	struct hashmap_entry ent;
 	struct worktree *wt; /* key is wt->head_ref */
 };
 
 static int ref_to_worktree_map_cmpfnc(const void *unused_lookupdata,
-				      const void *existing_hashmap_entry_to_test,
-				      const void *key,
+				      const struct hashmap_entry *eptr,
+				      const struct hashmap_entry *kptr,
 				      const void *keydata_aka_refname)
 {
-	const struct ref_to_worktree_entry *e = existing_hashmap_entry_to_test;
-	const struct ref_to_worktree_entry *k = key;
+	const struct ref_to_worktree_entry *e, *k;
+
+	e = container_of(eptr, const struct ref_to_worktree_entry, ent);
+	k = container_of(kptr, const struct ref_to_worktree_entry, ent);
+
 	return strcmp(e->wt->head_ref,
 		keydata_aka_refname ? keydata_aka_refname : k->wt->head_ref);
 }
@@ -1565,9 +1568,10 @@
 			struct ref_to_worktree_entry *entry;
 			entry = xmalloc(sizeof(*entry));
 			entry->wt = worktrees[i];
-			hashmap_entry_init(entry, strhash(worktrees[i]->head_ref));
+			hashmap_entry_init(&entry->ent,
+					strhash(worktrees[i]->head_ref));
 
-			hashmap_add(map, entry);
+			hashmap_add(map, &entry->ent);
 		}
 	}
 }
@@ -1584,18 +1588,20 @@
 
 static char *get_worktree_path(const struct used_atom *atom, const struct ref_array_item *ref)
 {
-	struct hashmap_entry entry;
+	struct hashmap_entry entry, *e;
 	struct ref_to_worktree_entry *lookup_result;
 
 	lazy_init_worktree_map();
 
 	hashmap_entry_init(&entry, strhash(ref->refname));
-	lookup_result = hashmap_get(&(ref_to_worktree_map.map), &entry, ref->refname);
+	e = hashmap_get(&(ref_to_worktree_map.map), &entry, ref->refname);
 
-	if (lookup_result)
-		return xstrdup(lookup_result->wt->path);
-	else
+	if (!e)
 		return xstrdup("");
+
+	lookup_result = container_of(e, struct ref_to_worktree_entry, ent);
+
+	return xstrdup(lookup_result->wt->path);
 }
 
 /*
@@ -2166,7 +2172,8 @@
 	used_atom_cnt = 0;
 
 	if (ref_to_worktree_map.worktrees) {
-		hashmap_free(&(ref_to_worktree_map.map), 1);
+		hashmap_free_entries(&(ref_to_worktree_map.map),
+					struct ref_to_worktree_entry, ent);
 		free_worktrees(ref_to_worktree_map.worktrees);
 		ref_to_worktree_map.worktrees = NULL;
 	}
diff --git a/refs.c b/refs.c
index cd297ee..1ab0bb5 100644
--- a/refs.c
+++ b/refs.c
@@ -1772,7 +1772,7 @@
 
 struct ref_store_hash_entry
 {
-	struct hashmap_entry ent; /* must be the first member! */
+	struct hashmap_entry ent;
 
 	struct ref_store *refs;
 
@@ -1781,11 +1781,16 @@
 };
 
 static int ref_store_hash_cmp(const void *unused_cmp_data,
-			      const void *entry, const void *entry_or_key,
+			      const struct hashmap_entry *eptr,
+			      const struct hashmap_entry *entry_or_key,
 			      const void *keydata)
 {
-	const struct ref_store_hash_entry *e1 = entry, *e2 = entry_or_key;
-	const char *name = keydata ? keydata : e2->name;
+	const struct ref_store_hash_entry *e1, *e2;
+	const char *name;
+
+	e1 = container_of(eptr, const struct ref_store_hash_entry, ent);
+	e2 = container_of(entry_or_key, const struct ref_store_hash_entry, ent);
+	name = keydata ? keydata : e2->name;
 
 	return strcmp(e1->name, name);
 }
@@ -1796,7 +1801,7 @@
 	struct ref_store_hash_entry *entry;
 
 	FLEX_ALLOC_STR(entry, name, name);
-	hashmap_entry_init(entry, strhash(name));
+	hashmap_entry_init(&entry->ent, strhash(name));
 	entry->refs = refs;
 	return entry;
 }
@@ -1815,12 +1820,15 @@
 					      const char *name)
 {
 	struct ref_store_hash_entry *entry;
+	unsigned int hash;
 
 	if (!map->tablesize)
 		/* It's initialized on demand in register_ref_store(). */
 		return NULL;
 
-	entry = hashmap_get_from_hash(map, strhash(name), name);
+	hash = strhash(name);
+	entry = hashmap_get_entry_from_hash(map, hash, name,
+					struct ref_store_hash_entry, ent);
 	return entry ? entry->refs : NULL;
 }
 
@@ -1863,10 +1871,13 @@
 				   struct ref_store *refs,
 				   const char *name)
 {
+	struct ref_store_hash_entry *entry;
+
 	if (!map->tablesize)
 		hashmap_init(map, ref_store_hash_cmp, NULL, 0);
 
-	if (hashmap_put(map, alloc_ref_store_hash_entry(name, refs)))
+	entry = alloc_ref_store_hash_entry(name, refs);
+	if (hashmap_put(map, &entry->ent))
 		BUG("%s ref_store '%s' initialized twice", type, name);
 }
 
diff --git a/remote.c b/remote.c
index e50f760..5c4666b 100644
--- a/remote.c
+++ b/remote.c
@@ -111,14 +111,16 @@
 };
 
 static int remotes_hash_cmp(const void *unused_cmp_data,
-			    const void *entry,
-			    const void *entry_or_key,
+			    const struct hashmap_entry *eptr,
+			    const struct hashmap_entry *entry_or_key,
 			    const void *keydata)
 {
-	const struct remote *a = entry;
-	const struct remote *b = entry_or_key;
+	const struct remote *a, *b;
 	const struct remotes_hash_key *key = keydata;
 
+	a = container_of(eptr, const struct remote, ent);
+	b = container_of(entry_or_key, const struct remote, ent);
+
 	if (key)
 		return strncmp(a->name, key->str, key->len) || a->name[key->len];
 	else
@@ -135,7 +137,7 @@
 {
 	struct remote *ret, *replaced;
 	struct remotes_hash_key lookup;
-	struct hashmap_entry lookup_entry;
+	struct hashmap_entry lookup_entry, *e;
 
 	if (!len)
 		len = strlen(name);
@@ -145,8 +147,9 @@
 	lookup.len = len;
 	hashmap_entry_init(&lookup_entry, memhash(name, len));
 
-	if ((ret = hashmap_get(&remotes_hash, &lookup_entry, &lookup)) != NULL)
-		return ret;
+	e = hashmap_get(&remotes_hash, &lookup_entry, &lookup);
+	if (e)
+		return container_of(e, struct remote, ent);
 
 	ret = xcalloc(1, sizeof(struct remote));
 	ret->prune = -1;  /* unspecified */
@@ -158,8 +161,8 @@
 	ALLOC_GROW(remotes, remotes_nr + 1, remotes_alloc);
 	remotes[remotes_nr++] = ret;
 
-	hashmap_entry_init(ret, lookup_entry.hash);
-	replaced = hashmap_put(&remotes_hash, ret);
+	hashmap_entry_init(&ret->ent, lookup_entry.hash);
+	replaced = hashmap_put_entry(&remotes_hash, ret, ent);
 	assert(replaced == NULL);  /* no previous entry overwritten */
 	return ret;
 }
diff --git a/remote.h b/remote.h
index 83e8856..0e1d2b2 100644
--- a/remote.h
+++ b/remote.h
@@ -14,7 +14,7 @@
 };
 
 struct remote {
-	struct hashmap_entry ent;  /* must be first */
+	struct hashmap_entry ent;
 
 	const char *name;
 	int origin, configured_in_repo;
diff --git a/rerere.c b/rerere.c
index 17abb47..3e51fdf 100644
--- a/rerere.c
+++ b/rerere.c
@@ -52,7 +52,7 @@
 
 static const char *rerere_id_hex(const struct rerere_id *id)
 {
-	return sha1_to_hex(id->collection->hash);
+	return hash_to_hex(id->collection->hash);
 }
 
 static void fit_variant(struct rerere_dir *rr_dir, int variant)
@@ -115,7 +115,7 @@
 static void scan_rerere_dir(struct rerere_dir *rr_dir)
 {
 	struct dirent *de;
-	DIR *dir = opendir(git_path("rr-cache/%s", sha1_to_hex(rr_dir->hash)));
+	DIR *dir = opendir(git_path("rr-cache/%s", hash_to_hex(rr_dir->hash)));
 
 	if (!dir)
 		return;
@@ -186,9 +186,9 @@
 	return id;
 }
 
-static struct rerere_id *new_rerere_id(unsigned char *sha1)
+static struct rerere_id *new_rerere_id(unsigned char *hash)
 {
-	return new_rerere_id_hex(sha1_to_hex(sha1));
+	return new_rerere_id_hex(hash_to_hex(hash));
 }
 
 /*
diff --git a/revision.c b/revision.c
index 442a705..0e39b2b 100644
--- a/revision.c
+++ b/revision.c
@@ -28,6 +28,7 @@
 #include "commit-graph.h"
 #include "prio-queue.h"
 #include "hashmap.h"
+#include "utf8.h"
 
 volatile show_early_output_fn_t show_early_output;
 
@@ -107,30 +108,34 @@
 };
 
 static int path_and_oids_cmp(const void *hashmap_cmp_fn_data,
-			     const struct path_and_oids_entry *e1,
-			     const struct path_and_oids_entry *e2,
+			     const struct hashmap_entry *eptr,
+			     const struct hashmap_entry *entry_or_key,
 			     const void *keydata)
 {
+	const struct path_and_oids_entry *e1, *e2;
+
+	e1 = container_of(eptr, const struct path_and_oids_entry, ent);
+	e2 = container_of(entry_or_key, const struct path_and_oids_entry, ent);
+
 	return strcmp(e1->path, e2->path);
 }
 
 static void paths_and_oids_init(struct hashmap *map)
 {
-	hashmap_init(map, (hashmap_cmp_fn) path_and_oids_cmp, NULL, 0);
+	hashmap_init(map, path_and_oids_cmp, NULL, 0);
 }
 
 static void paths_and_oids_clear(struct hashmap *map)
 {
 	struct hashmap_iter iter;
 	struct path_and_oids_entry *entry;
-	hashmap_iter_init(map, &iter);
 
-	while ((entry = (struct path_and_oids_entry *)hashmap_iter_next(&iter))) {
+	hashmap_for_each_entry(map, &iter, entry, ent /* member name */) {
 		oidset_clear(&entry->trees);
 		free(entry->path);
 	}
 
-	hashmap_free(map, 1);
+	hashmap_free_entries(map, struct path_and_oids_entry, ent);
 }
 
 static void paths_and_oids_insert(struct hashmap *map,
@@ -141,18 +146,19 @@
 	struct path_and_oids_entry key;
 	struct path_and_oids_entry *entry;
 
-	hashmap_entry_init(&key, hash);
+	hashmap_entry_init(&key.ent, hash);
 
 	/* use a shallow copy for the lookup */
 	key.path = (char *)path;
 	oidset_init(&key.trees, 0);
 
-	if (!(entry = (struct path_and_oids_entry *)hashmap_get(map, &key, NULL))) {
+	entry = hashmap_get_entry(map, &key, ent, NULL);
+	if (!entry) {
 		entry = xcalloc(1, sizeof(struct path_and_oids_entry));
-		hashmap_entry_init(entry, hash);
+		hashmap_entry_init(&entry->ent, hash);
 		entry->path = xstrdup(key.path);
 		oidset_init(&entry->trees, 16);
-		hashmap_put(map, entry);
+		hashmap_put(map, &entry->ent);
 	}
 
 	oidset_insert(&entry->trees, oid);
@@ -235,8 +241,7 @@
 		add_children_by_path(r, tree, &map);
 	}
 
-	hashmap_iter_init(&map, &map_iter);
-	while ((entry = hashmap_iter_next(&map_iter)))
+	hashmap_for_each_entry(&map, &map_iter, entry, ent /* member name */)
 		mark_trees_uninteresting_sparse(r, &entry->trees);
 
 	paths_and_oids_clear(&map);
@@ -2687,6 +2692,8 @@
 
 	grep_commit_pattern_type(GREP_PATTERN_TYPE_UNSPECIFIED,
 				 &revs->grep_filter);
+	if (!is_encoding_utf8(get_log_output_encoding()))
+		revs->grep_filter.ignore_locale = 1;
 	compile_grep_patterns(&revs->grep_filter);
 
 	if (revs->reverse && revs->reflog_info)
diff --git a/sequencer.c b/sequencer.c
index d648aaf..9d5964f 100644
--- a/sequencer.c
+++ b/sequencer.c
@@ -586,7 +586,7 @@
 			      struct replay_opts *opts)
 {
 	struct merge_options o;
-	struct tree *result, *next_tree, *base_tree, *head_tree;
+	struct tree *next_tree, *base_tree, *head_tree;
 	int clean;
 	char **xopt;
 	struct lock_file index_lock = LOCK_INIT;
@@ -613,11 +613,10 @@
 
 	clean = merge_trees(&o,
 			    head_tree,
-			    next_tree, base_tree, &result);
+			    next_tree, base_tree);
 	if (is_rebase_i(opts) && clean <= 0)
 		fputs(o.obuf.buf, stdout);
 	strbuf_release(&o.obuf);
-	diff_warn_rename_limit("merge.renamelimit", o.needed_rename_limit, 0);
 	if (clean < 0) {
 		rollback_lock_file(&index_lock);
 		return clean;
@@ -869,34 +868,6 @@
 	return NULL;
 }
 
-/* Read author-script and return an ident line (author <email> timestamp) */
-static const char *read_author_ident(struct strbuf *buf)
-{
-	struct strbuf out = STRBUF_INIT;
-	char *name, *email, *date;
-
-	if (read_author_script(rebase_path_author_script(),
-			       &name, &email, &date, 0))
-		return NULL;
-
-	/* validate date since fmt_ident() will die() on bad value */
-	if (parse_date(date, &out)){
-		warning(_("invalid date format '%s' in '%s'"),
-			date, rebase_path_author_script());
-		strbuf_release(&out);
-		return NULL;
-	}
-
-	strbuf_reset(&out);
-	strbuf_addstr(&out, fmt_ident(name, email, WANT_AUTHOR_IDENT, date, 0));
-	strbuf_swap(buf, &out);
-	strbuf_release(&out);
-	free(name);
-	free(email);
-	free(date);
-	return buf->buf;
-}
-
 static const char staged_changes_advice[] =
 N_("you have staged changes in your working tree\n"
 "If these changes are meant to be squashed into the previous commit, run:\n"
@@ -954,47 +925,6 @@
 {
 	struct child_process cmd = CHILD_PROCESS_INIT;
 
-	if ((flags & CREATE_ROOT_COMMIT) && !(flags & AMEND_MSG)) {
-		struct strbuf msg = STRBUF_INIT, script = STRBUF_INIT;
-		const char *author = NULL;
-		struct object_id root_commit, *cache_tree_oid;
-		int res = 0;
-
-		if (is_rebase_i(opts)) {
-			author = read_author_ident(&script);
-			if (!author) {
-				strbuf_release(&script);
-				return -1;
-			}
-		}
-
-		if (!defmsg)
-			BUG("root commit without message");
-
-		if (!(cache_tree_oid = get_cache_tree_oid(r->index)))
-			res = -1;
-
-		if (!res)
-			res = strbuf_read_file(&msg, defmsg, 0);
-
-		if (res <= 0)
-			res = error_errno(_("could not read '%s'"), defmsg);
-		else
-			res = commit_tree(msg.buf, msg.len, cache_tree_oid,
-					  NULL, &root_commit, author,
-					  opts->gpg_sign);
-
-		strbuf_release(&msg);
-		strbuf_release(&script);
-		if (!res) {
-			update_ref(NULL, "CHERRY_PICK_HEAD", &root_commit, NULL,
-				   REF_NO_DEREF, UPDATE_REFS_MSG_ON_ERR);
-			res = update_ref(NULL, "HEAD", &root_commit, NULL, 0,
-					 UPDATE_REFS_MSG_ON_ERR);
-		}
-		return res < 0 ? error(_("writing root commit")) : 0;
-	}
-
 	cmd.git_cmd = 1;
 
 	if (is_rebase_i(opts) && read_env_script(&cmd.env_array)) {
@@ -1378,7 +1308,7 @@
 			 struct object_id *oid)
 {
 	struct object_id tree;
-	struct commit *current_head;
+	struct commit *current_head = NULL;
 	struct commit_list *parents = NULL;
 	struct commit_extra_header *extra = NULL;
 	struct strbuf err = STRBUF_INIT;
@@ -1413,7 +1343,8 @@
 		}
 		parents = copy_commit_list(current_head->parents);
 		extra = read_commit_extra_headers(current_head, exclude_gpgsig);
-	} else if (current_head) {
+	} else if (current_head &&
+		   (!(flags & CREATE_ROOT_COMMIT) || (flags & AMEND_MSG))) {
 		commit_list_insert(current_head, &parents);
 	}
 
@@ -1490,8 +1421,7 @@
 {
 	int res = 1;
 
-	if (!(flags & EDIT_MSG) && !(flags & VERIFY_MSG) &&
-	    !(flags & CREATE_ROOT_COMMIT)) {
+	if (!(flags & EDIT_MSG) && !(flags & VERIFY_MSG)) {
 		struct object_id oid;
 		struct strbuf sb = STRBUF_INIT;
 
@@ -1775,7 +1705,7 @@
 			  enum todo_command command,
 			  struct commit *commit,
 			  struct replay_opts *opts,
-			  int final_fixup)
+			  int final_fixup, int *check_todo)
 {
 	unsigned int flags = opts->edit ? EDIT_MSG : 0;
 	const char *msg_file = opts->edit ? NULL : git_path_merge_msg(r);
@@ -1785,7 +1715,7 @@
 	char *author = NULL;
 	struct commit_message msg = { NULL, NULL, NULL, NULL };
 	struct strbuf msgbuf = STRBUF_INIT;
-	int res, unborn = 0, allow;
+	int res, unborn = 0, reword = 0, allow;
 
 	if (opts->no_commit) {
 		/*
@@ -1855,7 +1785,7 @@
 			opts);
 		if (res || command != TODO_REWORD)
 			goto leave;
-		flags |= EDIT_MSG | AMEND_MSG | VERIFY_MSG;
+		reword = 1;
 		msg_file = NULL;
 		goto fast_forward_edit;
 	}
@@ -1913,7 +1843,7 @@
 	}
 
 	if (command == TODO_REWORD)
-		flags |= EDIT_MSG | VERIFY_MSG;
+		reword = 1;
 	else if (is_fixup(command)) {
 		if (update_squash_messages(r, command, commit, opts))
 			return -1;
@@ -1997,13 +1927,21 @@
 	} else if (allow)
 		flags |= ALLOW_EMPTY;
 	if (!opts->no_commit) {
-fast_forward_edit:
 		if (author || command == TODO_REVERT || (flags & AMEND_MSG))
 			res = do_commit(r, msg_file, author, opts, flags);
 		else
 			res = error(_("unable to parse commit author"));
+		*check_todo = !!(flags & EDIT_MSG);
+		if (!res && reword) {
+fast_forward_edit:
+			res = run_git_commit(r, NULL, opts, EDIT_MSG |
+					     VERIFY_MSG | AMEND_MSG |
+					     (flags & ALLOW_EMPTY));
+			*check_todo = 1;
+		}
 	}
 
+
 	if (!res && final_fixup) {
 		unlink(rebase_path_fixup_msg());
 		unlink(rebase_path_squash_msg());
@@ -3578,7 +3516,7 @@
 		goto leave_merge;
 	}
 
-	write_message(oid_to_hex(&merge_commit->object.oid), GIT_SHA1_HEXSZ,
+	write_message(oid_to_hex(&merge_commit->object.oid), the_hash_algo->hexsz,
 		      git_path_merge_head(r), 0);
 	write_message("no-ff", 5, git_path_merge_mode(r), 0);
 
@@ -3828,6 +3766,7 @@
 	while (todo_list->current < todo_list->nr) {
 		struct todo_item *item = todo_list->items + todo_list->current;
 		const char *arg = todo_item_get_arg(todo_list, item);
+		int check_todo = 0;
 
 		if (save_todo(todo_list, opts))
 			return -1;
@@ -3866,7 +3805,8 @@
 					command_to_string(item->command), NULL),
 					1);
 			res = do_pick_commit(r, item->command, item->commit,
-					opts, is_final_fixup(todo_list));
+					     opts, is_final_fixup(todo_list),
+					     &check_todo);
 			if (is_rebase_i(opts) && res < 0) {
 				/* Reschedule */
 				advise(_(rescheduled_advice),
@@ -3923,7 +3863,6 @@
 		} else if (item->command == TODO_EXEC) {
 			char *end_of_arg = (char *)(arg + item->arg_len);
 			int saved = *end_of_arg;
-			struct stat st;
 
 			if (!opts->verbose)
 				term_clear_line();
@@ -3934,17 +3873,8 @@
 			if (res) {
 				if (opts->reschedule_failed_exec)
 					reschedule = 1;
-			} else if (stat(get_todo_path(opts), &st))
-				res = error_errno(_("could not stat '%s'"),
-						  get_todo_path(opts));
-			else if (match_stat_data(&todo_list->stat, &st)) {
-				/* Reread the todo file if it has changed. */
-				todo_list_release(todo_list);
-				if (read_populate_todo(r, todo_list, opts))
-					res = -1; /* message was printed */
-				/* `current` will be incremented below */
-				todo_list->current = -1;
 			}
+			check_todo = 1;
 		} else if (item->command == TODO_LABEL) {
 			if ((res = do_label(r, arg, item->arg_len)))
 				reschedule = 1;
@@ -3980,6 +3910,20 @@
 							item->commit,
 							arg, item->arg_len,
 							opts, res, 0);
+		} else if (check_todo && !res) {
+			struct stat st;
+
+			if (stat(get_todo_path(opts), &st)) {
+				res = error_errno(_("could not stat '%s'"),
+						  get_todo_path(opts));
+			} else if (match_stat_data(&todo_list->stat, &st)) {
+				/* Reread the todo file if it has changed. */
+				todo_list_release(todo_list);
+				if (read_populate_todo(r, todo_list, opts))
+					res = -1; /* message was printed */
+				/* `current` will be incremented below */
+				todo_list->current = -1;
+			}
 		}
 
 		todo_list->current++;
@@ -4306,9 +4250,12 @@
 		       struct commit *cmit,
 		       struct replay_opts *opts)
 {
+	int check_todo;
+
 	setenv(GIT_REFLOG_ACTION, action_name(opts), 0);
 	return do_pick_commit(r, opts->action == REPLAY_PICK ?
-		TODO_PICK : TODO_REVERT, cmit, opts, 0);
+			      TODO_PICK : TODO_REVERT, cmit, opts, 0,
+			      &check_todo);
 }
 
 int sequencer_pick_revisions(struct repository *r,
@@ -4450,9 +4397,14 @@
 	char label[FLEX_ARRAY];
 };
 
-static int labels_cmp(const void *fndata, const struct labels_entry *a,
-		      const struct labels_entry *b, const void *key)
+static int labels_cmp(const void *fndata, const struct hashmap_entry *eptr,
+		      const struct hashmap_entry *entry_or_key, const void *key)
 {
+	const struct labels_entry *a, *b;
+
+	a = container_of(eptr, const struct labels_entry, entry);
+	b = container_of(entry_or_key, const struct labels_entry, entry);
+
 	return key ? strcmp(a->label, key) : strcmp(a->label, b->label);
 }
 
@@ -4497,7 +4449,7 @@
 		char *p;
 
 		strbuf_reset(&state->buf);
-		strbuf_grow(&state->buf, GIT_SHA1_HEXSZ);
+		strbuf_grow(&state->buf, GIT_MAX_HEXSZ);
 		label = p = state->buf.buf;
 
 		find_unique_abbrev_r(p, oid, default_abbrev);
@@ -4510,7 +4462,7 @@
 			size_t i = strlen(p) + 1;
 
 			oid_to_hex_r(p, oid);
-			for (; i < GIT_SHA1_HEXSZ; i++) {
+			for (; i < the_hash_algo->hexsz; i++) {
 				char save = p[i];
 				p[i] = '\0';
 				if (!hashmap_get_from_hash(&state->labels,
@@ -4548,8 +4500,8 @@
 	}
 
 	FLEX_ALLOC_STR(labels_entry, label, label);
-	hashmap_entry_init(labels_entry, strihash(label));
-	hashmap_add(&state->labels, labels_entry);
+	hashmap_entry_init(&labels_entry->entry, strihash(label));
+	hashmap_add(&state->labels, &labels_entry->entry);
 
 	FLEX_ALLOC_STR(string_entry, string, label);
 	oidcpy(&string_entry->entry.oid, oid);
@@ -4584,7 +4536,7 @@
 
 	oidmap_init(&commit2todo, 0);
 	oidmap_init(&state.commit2label, 0);
-	hashmap_init(&state.labels, (hashmap_cmp_fn) labels_cmp, NULL, 0);
+	hashmap_init(&state.labels, labels_cmp, NULL, 0);
 	strbuf_init(&state.buf, 32);
 
 	if (revs->cmdline.nr && (revs->cmdline.rev[0].flags & BOTTOM)) {
@@ -4779,7 +4731,7 @@
 
 	oidmap_free(&commit2todo, 1);
 	oidmap_free(&state.commit2label, 1);
-	hashmap_free(&state.labels, 1);
+	hashmap_free_entries(&state.labels, struct labels_entry, entry);
 	strbuf_release(&state.buf);
 
 	return 0;
@@ -5150,9 +5102,15 @@
 };
 
 static int subject2item_cmp(const void *fndata,
-			    const struct subject2item_entry *a,
-			    const struct subject2item_entry *b, const void *key)
+			    const struct hashmap_entry *eptr,
+			    const struct hashmap_entry *entry_or_key,
+			    const void *key)
 {
+	const struct subject2item_entry *a, *b;
+
+	a = container_of(eptr, const struct subject2item_entry, entry);
+	b = container_of(entry_or_key, const struct subject2item_entry, entry);
+
 	return key ? strcmp(a->subject, key) : strcmp(a->subject, b->subject);
 }
 
@@ -5185,8 +5143,7 @@
 	 * In that case, last[i] will indicate the index of the latest item to
 	 * be moved to appear after the i'th.
 	 */
-	hashmap_init(&subject2item, (hashmap_cmp_fn) subject2item_cmp,
-		     NULL, todo_list->nr);
+	hashmap_init(&subject2item, subject2item_cmp, NULL, todo_list->nr);
 	ALLOC_ARRAY(next, todo_list->nr);
 	ALLOC_ARRAY(tail, todo_list->nr);
 	ALLOC_ARRAY(subjects, todo_list->nr);
@@ -5229,8 +5186,11 @@
 					break;
 			}
 
-			if ((entry = hashmap_get_from_hash(&subject2item,
-							   strhash(p), p)))
+			entry = hashmap_get_entry_from_hash(&subject2item,
+						strhash(p), p,
+						struct subject2item_entry,
+						entry);
+			if (entry)
 				/* found by title */
 				i2 = entry->i;
 			else if (!strchr(p, ' ') &&
@@ -5264,8 +5224,9 @@
 						strhash(subject), subject)) {
 			FLEX_ALLOC_MEM(entry, subject, subject, subject_len);
 			entry->i = i;
-			hashmap_entry_init(entry, strhash(entry->subject));
-			hashmap_put(&subject2item, entry);
+			hashmap_entry_init(&entry->entry,
+					strhash(entry->subject));
+			hashmap_put(&subject2item, &entry->entry);
 		}
 	}
 
@@ -5299,7 +5260,7 @@
 	for (i = 0; i < todo_list->nr; i++)
 		free(subjects[i]);
 	free(subjects);
-	hashmap_free(&subject2item, 1);
+	hashmap_free_entries(&subject2item, struct subject2item_entry, entry);
 
 	clear_commit_todo_item(&commit_todo);
 
diff --git a/sha1-file.c b/sha1-file.c
index e85f249..188de57 100644
--- a/sha1-file.c
+++ b/sha1-file.c
@@ -55,7 +55,6 @@
 	"\x6f\xe1\x41\xf7\x74\x91\x20\xa3\x03\x72" \
 	"\x18\x13"
 
-const unsigned char null_sha1[GIT_MAX_RAWSZ];
 const struct object_id null_oid;
 static const struct object_id empty_tree_oid = {
 	EMPTY_TREE_SHA1_BIN_LITERAL
diff --git a/sha1-lookup.c b/sha1-lookup.c
index 796ab68..2918584 100644
--- a/sha1-lookup.c
+++ b/sha1-lookup.c
@@ -50,7 +50,7 @@
  * The sha1 of element i (between 0 and nr - 1) should be returned
  * by "fn(i, table)".
  */
-int sha1_pos(const unsigned char *sha1, void *table, size_t nr,
+int sha1_pos(const unsigned char *hash, void *table, size_t nr,
 	     sha1_access_fn fn)
 {
 	size_t hi = nr;
@@ -63,14 +63,14 @@
 	if (nr != 1) {
 		size_t lov, hiv, miv, ofs;
 
-		for (ofs = 0; ofs < 18; ofs += 2) {
+		for (ofs = 0; ofs < the_hash_algo->rawsz - 2; ofs += 2) {
 			lov = take2(fn(0, table) + ofs);
 			hiv = take2(fn(nr - 1, table) + ofs);
-			miv = take2(sha1 + ofs);
+			miv = take2(hash + ofs);
 			if (miv < lov)
 				return -1;
 			if (hiv < miv)
-				return -1 - nr;
+				return index_pos_to_insert_pos(nr);
 			if (lov != hiv) {
 				/*
 				 * At this point miv could be equal
@@ -88,7 +88,7 @@
 
 	do {
 		int cmp;
-		cmp = hashcmp(fn(mi, table), sha1);
+		cmp = hashcmp(fn(mi, table), hash);
 		if (!cmp)
 			return mi;
 		if (cmp > 0)
@@ -97,7 +97,7 @@
 			lo = mi + 1;
 		mi = lo + (hi - lo) / 2;
 	} while (lo < hi);
-	return -lo-1;
+	return index_pos_to_insert_pos(lo);
 }
 
 int bsearch_hash(const unsigned char *sha1, const uint32_t *fanout_nbo,
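
The conversion above does not change sha1_pos()'s contract: a non-negative
return value is the index of the matching entry, and a negative one encodes
the position at which the hash would have to be inserted.  The open-coded
"-1 - pos" arithmetic merely moves into index_pos_to_insert_pos(), which can
additionally range-check the position (the helper itself is not shown in this
patch).  A minimal caller-side sketch, assuming the usual sha1_access_fn
shape (index, table); nth_oid_hash() and the object_id table are
illustrative, not existing code:

	#include "cache.h"
	#include "sha1-lookup.h"

	/* illustrative accessor: hash of the i-th entry of a sorted oid table */
	static const unsigned char *nth_oid_hash(size_t i, void *table)
	{
		return ((struct object_id *)table)[i].hash;
	}

	/* index of `needle` if *found, otherwise the position to insert it at */
	static int lookup_or_insert_pos(const unsigned char *needle,
					struct object_id *table, size_t nr,
					int *found)
	{
		int pos = sha1_pos(needle, table, nr, nth_oid_hash);

		*found = pos >= 0;
		return *found ? pos : -1 - pos;
	}
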
diff --git a/shallow.c b/shallow.c
index 5fa2b15..7fd04af 100644
--- a/shallow.c
+++ b/shallow.c
@@ -12,10 +12,7 @@
 #include "diff.h"
 #include "revision.h"
 #include "commit-slab.h"
-#include "revision.h"
 #include "list-objects.h"
-#include "commit-slab.h"
-#include "repository.h"
 #include "commit-reach.h"
 
 void set_alternate_shallow_file(struct repository *r, const char *path, int override)
@@ -156,6 +153,8 @@
 	for (i = 0; i < depths.slab_count; i++) {
 		int j;
 
+		if (!depths.slab[i])
+			continue;
 		for (j = 0; j < depths.slab_size; j++)
 			free(depths.slab[i][j]);
 	}
diff --git a/compat/qsort.c b/stable-qsort.c
similarity index 88%
rename from compat/qsort.c
rename to stable-qsort.c
index 7d071af..6cbaf39 100644
--- a/compat/qsort.c
+++ b/stable-qsort.c
@@ -1,4 +1,4 @@
-#include "../git-compat-util.h"
+#include "git-compat-util.h"
 
 /*
  * A merge sort implementation, simplified from the qsort implementation
@@ -44,8 +44,8 @@
 	memcpy(b, t, (n - n2) * s);
 }
 
-void git_qsort(void *b, size_t n, size_t s,
-	       int (*cmp)(const void *, const void *))
+void git_stable_qsort(void *b, size_t n, size_t s,
+		      int (*cmp)(const void *, const void *))
 {
 	const size_t size = st_mult(n, s);
 	char buf[1024];
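
The sort itself is unchanged by the rename; callers whose output must not
depend on how the platform qsort(3) breaks ties can now ask for
git_stable_qsort() by name instead of relying on the compat replacement being
compiled in.  A small, hedged sketch of such a caller (the struct and field
names are made up for illustration, and the declaration is assumed to live in
git-compat-util.h, as git_qsort()'s did):

	#include "git-compat-util.h"

	struct scored_item {
		int score;
		const char *name;
	};

	static int score_cmp(const void *a_, const void *b_)
	{
		const struct scored_item *a = a_, *b = b_;

		/* higher score first; equal scores keep their input order */
		if (a->score != b->score)
			return b->score > a->score ? 1 : -1;
		return 0;
	}

	static void rank_items(struct scored_item *items, size_t nr)
	{
		git_stable_qsort(items, nr, sizeof(*items), score_cmp);
	}
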
diff --git a/sub-process.c b/sub-process.c
index 3f4af93..1b1af9d 100644
--- a/sub-process.c
+++ b/sub-process.c
@@ -6,12 +6,14 @@
 #include "pkt-line.h"
 
 int cmd2process_cmp(const void *unused_cmp_data,
-		    const void *entry,
-		    const void *entry_or_key,
+		    const struct hashmap_entry *eptr,
+		    const struct hashmap_entry *entry_or_key,
 		    const void *unused_keydata)
 {
-	const struct subprocess_entry *e1 = entry;
-	const struct subprocess_entry *e2 = entry_or_key;
+	const struct subprocess_entry *e1, *e2;
+
+	e1 = container_of(eptr, const struct subprocess_entry, ent);
+	e2 = container_of(entry_or_key, const struct subprocess_entry, ent);
 
 	return strcmp(e1->cmd, e2->cmd);
 }
@@ -20,9 +22,9 @@
 {
 	struct subprocess_entry key;
 
-	hashmap_entry_init(&key, strhash(cmd));
+	hashmap_entry_init(&key.ent, strhash(cmd));
 	key.cmd = cmd;
-	return hashmap_get(hashmap, &key, NULL);
+	return hashmap_get_entry(hashmap, &key, ent, NULL);
 }
 
 int subprocess_read_status(int fd, struct strbuf *status)
@@ -58,7 +60,7 @@
 	kill(entry->process.pid, SIGTERM);
 	finish_command(&entry->process);
 
-	hashmap_remove(hashmap, entry, NULL);
+	hashmap_remove(hashmap, &entry->ent, NULL);
 }
 
 static void subprocess_exit_handler(struct child_process *process)
@@ -96,7 +98,7 @@
 		return err;
 	}
 
-	hashmap_entry_init(entry, strhash(cmd));
+	hashmap_entry_init(&entry->ent, strhash(cmd));
 
 	err = startfn(entry);
 	if (err) {
@@ -105,7 +107,7 @@
 		return err;
 	}
 
-	hashmap_add(hashmap, entry);
+	hashmap_add(hashmap, &entry->ent);
 	return 0;
 }
 
diff --git a/sub-process.h b/sub-process.h
index 5c182fa..e85f21f 100644
--- a/sub-process.h
+++ b/sub-process.h
@@ -24,7 +24,7 @@
 
 /* Members should not be accessed directly. */
 struct subprocess_entry {
-	struct hashmap_entry ent; /* must be the first member! */
+	struct hashmap_entry ent;
 	const char *cmd;
 	struct child_process process;
 };
@@ -43,8 +43,8 @@
 
 /* Function to test two subprocess hashmap entries for equality. */
 int cmd2process_cmp(const void *unused_cmp_data,
-		    const void *e1,
-		    const void *e2,
+		    const struct hashmap_entry *e,
+		    const struct hashmap_entry *entry_or_key,
 		    const void *unused_keydata);
 
 /*
diff --git a/submodule-config.c b/submodule-config.c
index 4264ee2..401a9b2 100644
--- a/submodule-config.c
+++ b/submodule-config.c
@@ -38,24 +38,28 @@
 };
 
 static int config_path_cmp(const void *unused_cmp_data,
-			   const void *entry,
-			   const void *entry_or_key,
+			   const struct hashmap_entry *eptr,
+			   const struct hashmap_entry *entry_or_key,
 			   const void *unused_keydata)
 {
-	const struct submodule_entry *a = entry;
-	const struct submodule_entry *b = entry_or_key;
+	const struct submodule_entry *a, *b;
+
+	a = container_of(eptr, const struct submodule_entry, ent);
+	b = container_of(entry_or_key, const struct submodule_entry, ent);
 
 	return strcmp(a->config->path, b->config->path) ||
 	       !oideq(&a->config->gitmodules_oid, &b->config->gitmodules_oid);
 }
 
 static int config_name_cmp(const void *unused_cmp_data,
-			   const void *entry,
-			   const void *entry_or_key,
+			   const struct hashmap_entry *eptr,
+			   const struct hashmap_entry *entry_or_key,
 			   const void *unused_keydata)
 {
-	const struct submodule_entry *a = entry;
-	const struct submodule_entry *b = entry_or_key;
+	const struct submodule_entry *a, *b;
+
+	a = container_of(eptr, const struct submodule_entry, ent);
+	b = container_of(entry_or_key, const struct submodule_entry, ent);
 
 	return strcmp(a->config->name, b->config->name) ||
 	       !oideq(&a->config->gitmodules_oid, &b->config->gitmodules_oid);
@@ -95,12 +99,12 @@
 	 * allocation of struct submodule entries. Each is allocated by
 	 * their .gitmodules blob sha1 and submodule name.
 	 */
-	hashmap_iter_init(&cache->for_name, &iter);
-	while ((entry = hashmap_iter_next(&iter)))
+	hashmap_for_each_entry(&cache->for_name, &iter, entry,
+				ent /* member name */)
 		free_one_config(entry);
 
-	hashmap_free(&cache->for_path, 1);
-	hashmap_free(&cache->for_name, 1);
+	hashmap_free_entries(&cache->for_path, struct submodule_entry, ent);
+	hashmap_free_entries(&cache->for_name, struct submodule_entry, ent);
 	cache->initialized = 0;
 	cache->gitmodules_read = 0;
 }
@@ -123,9 +127,9 @@
 	unsigned int hash = hash_oid_string(&submodule->gitmodules_oid,
 					    submodule->path);
 	struct submodule_entry *e = xmalloc(sizeof(*e));
-	hashmap_entry_init(e, hash);
+	hashmap_entry_init(&e->ent, hash);
 	e->config = submodule;
-	hashmap_put(&cache->for_path, e);
+	hashmap_put(&cache->for_path, &e->ent);
 }
 
 static void cache_remove_path(struct submodule_cache *cache,
@@ -135,9 +139,9 @@
 					    submodule->path);
 	struct submodule_entry e;
 	struct submodule_entry *removed;
-	hashmap_entry_init(&e, hash);
+	hashmap_entry_init(&e.ent, hash);
 	e.config = submodule;
-	removed = hashmap_remove(&cache->for_path, &e, NULL);
+	removed = hashmap_remove_entry(&cache->for_path, &e, ent, NULL);
 	free(removed);
 }
 
@@ -147,9 +151,9 @@
 	unsigned int hash = hash_oid_string(&submodule->gitmodules_oid,
 					    submodule->name);
 	struct submodule_entry *e = xmalloc(sizeof(*e));
-	hashmap_entry_init(e, hash);
+	hashmap_entry_init(&e->ent, hash);
 	e->config = submodule;
-	hashmap_add(&cache->for_name, e);
+	hashmap_add(&cache->for_name, &e->ent);
 }
 
 static const struct submodule *cache_lookup_path(struct submodule_cache *cache,
@@ -163,10 +167,10 @@
 	oidcpy(&key_config.gitmodules_oid, gitmodules_oid);
 	key_config.path = path;
 
-	hashmap_entry_init(&key, hash);
+	hashmap_entry_init(&key.ent, hash);
 	key.config = &key_config;
 
-	entry = hashmap_get(&cache->for_path, &key, NULL);
+	entry = hashmap_get_entry(&cache->for_path, &key, ent, NULL);
 	if (entry)
 		return entry->config;
 	return NULL;
@@ -183,10 +187,10 @@
 	oidcpy(&key_config.gitmodules_oid, gitmodules_oid);
 	key_config.name = name;
 
-	hashmap_entry_init(&key, hash);
+	hashmap_entry_init(&key.ent, hash);
 	key.config = &key_config;
 
-	entry = hashmap_get(&cache->for_name, &key, NULL);
+	entry = hashmap_get_entry(&cache->for_name, &key, ent, NULL);
 	if (entry)
 		return entry->config;
 	return NULL;
@@ -550,7 +554,9 @@
 		struct hashmap_iter iter;
 		struct submodule_entry *entry;
 
-		entry = hashmap_iter_first(&cache->for_name, &iter);
+		entry = hashmap_iter_first_entry(&cache->for_name, &iter,
+						struct submodule_entry,
+						ent /* member name */);
 		if (!entry)
 			return NULL;
 		return entry->config;
diff --git a/t/helper/test-hashmap.c b/t/helper/test-hashmap.c
index aaf17b0..f387062 100644
--- a/t/helper/test-hashmap.c
+++ b/t/helper/test-hashmap.c
@@ -5,6 +5,7 @@
 
 struct test_entry
 {
+	int padding; /* hashmap entry no longer needs to be the first member */
 	struct hashmap_entry ent;
 	/* key and value as two \0-terminated strings */
 	char key[FLEX_ARRAY];
@@ -16,15 +17,17 @@
 }
 
 static int test_entry_cmp(const void *cmp_data,
-			  const void *entry,
-			  const void *entry_or_key,
+			  const struct hashmap_entry *eptr,
+			  const struct hashmap_entry *entry_or_key,
 			  const void *keydata)
 {
 	const int ignore_case = cmp_data ? *((int *)cmp_data) : 0;
-	const struct test_entry *e1 = entry;
-	const struct test_entry *e2 = entry_or_key;
+	const struct test_entry *e1, *e2;
 	const char *key = keydata;
 
+	e1 = container_of(eptr, const struct test_entry, ent);
+	e2 = container_of(entry_or_key, const struct test_entry, ent);
+
 	if (ignore_case)
 		return strcasecmp(e1->key, key ? key : e2->key);
 	else
@@ -37,7 +40,7 @@
 	size_t klen = strlen(key);
 	size_t vlen = strlen(value);
 	struct test_entry *entry = xmalloc(st_add4(sizeof(*entry), klen, vlen, 2));
-	hashmap_entry_init(entry, hash);
+	hashmap_entry_init(&entry->ent, hash);
 	memcpy(entry->key, key, klen + 1);
 	memcpy(entry->key + klen + 1, value, vlen + 1);
 	return entry;
@@ -103,11 +106,11 @@
 
 			/* add entries */
 			for (i = 0; i < TEST_SIZE; i++) {
-				hashmap_entry_init(entries[i], hashes[i]);
-				hashmap_add(&map, entries[i]);
+				hashmap_entry_init(&entries[i]->ent, hashes[i]);
+				hashmap_add(&map, &entries[i]->ent);
 			}
 
-			hashmap_free(&map, 0);
+			hashmap_free(&map);
 		}
 	} else {
 		/* test map lookups */
@@ -116,8 +119,8 @@
 		/* fill the map (sparsely if specified) */
 		j = (method & TEST_SPARSE) ? TEST_SIZE / 10 : TEST_SIZE;
 		for (i = 0; i < j; i++) {
-			hashmap_entry_init(entries[i], hashes[i]);
-			hashmap_add(&map, entries[i]);
+			hashmap_entry_init(&entries[i]->ent, hashes[i]);
+			hashmap_add(&map, &entries[i]->ent);
 		}
 
 		for (j = 0; j < rounds; j++) {
@@ -127,7 +130,7 @@
 			}
 		}
 
-		hashmap_free(&map, 0);
+		hashmap_free(&map);
 	}
 }
 
@@ -179,7 +182,7 @@
 			entry = alloc_test_entry(hash, p1, p2);
 
 			/* add to hashmap */
-			hashmap_add(&map, entry);
+			hashmap_add(&map, &entry->ent);
 
 		} else if (!strcmp("put", cmd) && p1 && p2) {
 
@@ -187,43 +190,44 @@
 			entry = alloc_test_entry(hash, p1, p2);
 
 			/* add / replace entry */
-			entry = hashmap_put(&map, entry);
+			entry = hashmap_put_entry(&map, entry, ent);
 
 			/* print and free replaced entry, if any */
 			puts(entry ? get_value(entry) : "NULL");
 			free(entry);
 
 		} else if (!strcmp("get", cmd) && p1) {
-
 			/* lookup entry in hashmap */
-			entry = hashmap_get_from_hash(&map, hash, p1);
+			entry = hashmap_get_entry_from_hash(&map, hash, p1,
+							struct test_entry, ent);
 
 			/* print result */
 			if (!entry)
 				puts("NULL");
-			while (entry) {
+			hashmap_for_each_entry_from(&map, entry, ent)
 				puts(get_value(entry));
-				entry = hashmap_get_next(&map, entry);
-			}
 
 		} else if (!strcmp("remove", cmd) && p1) {
 
 			/* setup static key */
 			struct hashmap_entry key;
+			struct hashmap_entry *rm;
 			hashmap_entry_init(&key, hash);
 
 			/* remove entry from hashmap */
-			entry = hashmap_remove(&map, &key, p1);
+			rm = hashmap_remove(&map, &key, p1);
+			entry = rm ? container_of(rm, struct test_entry, ent)
+					: NULL;
 
 			/* print result and free entry*/
 			puts(entry ? get_value(entry) : "NULL");
 			free(entry);
 
 		} else if (!strcmp("iterate", cmd)) {
-
 			struct hashmap_iter iter;
-			hashmap_iter_init(&map, &iter);
-			while ((entry = hashmap_iter_next(&iter)))
+
+			hashmap_for_each_entry(&map, &iter, entry,
+						ent /* member name */)
 				printf("%s %s\n", entry->key, get_value(entry));
 
 		} else if (!strcmp("size", cmd)) {
@@ -258,6 +262,6 @@
 	}
 
 	strbuf_release(&line);
-	hashmap_free(&map, 1);
+	hashmap_free_entries(&map, struct test_entry, ent);
 	return 0;
 }
diff --git a/t/helper/test-lazy-init-name-hash.c b/t/helper/test-lazy-init-name-hash.c
index b99a370..cd1b4c9 100644
--- a/t/helper/test-lazy-init-name-hash.c
+++ b/t/helper/test-lazy-init-name-hash.c
@@ -41,17 +41,13 @@
 			die("non-threaded code path used");
 	}
 
-	dir = hashmap_iter_first(&the_index.dir_hash, &iter_dir);
-	while (dir) {
+	hashmap_for_each_entry(&the_index.dir_hash, &iter_dir, dir,
+				ent /* member name */)
 		printf("dir %08x %7d %s\n", dir->ent.hash, dir->nr, dir->name);
-		dir = hashmap_iter_next(&iter_dir);
-	}
 
-	ce = hashmap_iter_first(&the_index.name_hash, &iter_cache);
-	while (ce) {
+	hashmap_for_each_entry(&the_index.name_hash, &iter_cache, ce,
+				ent /* member name */)
 		printf("name %08x %s\n", ce->ent.hash, ce->name);
-		ce = hashmap_iter_next(&iter_cache);
-	}
 
 	discard_cache();
 }
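
All of the hashmap conversions above follow the same pattern: the embedded
struct hashmap_entry becomes an explicitly named member (it no longer has to
be the first one), comparison callbacks receive struct hashmap_entry pointers
and recover their containing structs with container_of(), and iteration and
teardown go through hashmap_for_each_entry() and hashmap_free_entries().  A
condensed sketch of the resulting calling convention, using only calls that
appear in these hunks; the struct, field and key names are illustrative:

	#include "git-compat-util.h"
	#include "hashmap.h"

	struct word_entry {
		int count;                /* the entry need not come first */
		struct hashmap_entry ent; /* embedded, explicitly named */
		char word[FLEX_ARRAY];
	};

	static int word_entry_cmp(const void *cmp_data,
				  const struct hashmap_entry *eptr,
				  const struct hashmap_entry *entry_or_key,
				  const void *keydata)
	{
		const struct word_entry *a, *b;

		a = container_of(eptr, const struct word_entry, ent);
		b = container_of(entry_or_key, const struct word_entry, ent);

		return strcmp(a->word, keydata ? keydata : b->word);
	}

	static void demo(void)
	{
		struct hashmap map;
		struct hashmap_iter iter;
		struct word_entry *e;

		hashmap_init(&map, word_entry_cmp, NULL, 0);

		FLEX_ALLOC_STR(e, word, "frotz");
		hashmap_entry_init(&e->ent, strhash(e->word));
		hashmap_add(&map, &e->ent);

		/* lookup by key; the cmp callback sees "frotz" as keydata */
		e = hashmap_get_entry_from_hash(&map, strhash("frotz"), "frotz",
						struct word_entry, ent);
		if (e)
			fprintf(stderr, "found %s\n", e->word);

		hashmap_for_each_entry(&map, &iter, e, ent /* member name */)
			fprintf(stderr, "%s\n", e->word);

		/* frees the entries themselves along with the table */
		hashmap_free_entries(&map, struct word_entry, ent);
	}
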
diff --git a/t/helper/test-run-command.c b/t/helper/test-run-command.c
index 2cc93bb..ead6dc6 100644
--- a/t/helper/test-run-command.c
+++ b/t/helper/test-run-command.c
@@ -10,9 +10,14 @@
 
 #include "test-tool.h"
 #include "git-compat-util.h"
+#include "cache.h"
 #include "run-command.h"
 #include "argv-array.h"
 #include "strbuf.h"
+#include "parse-options.h"
+#include "string-list.h"
+#include "thread-utils.h"
+#include "wildmatch.h"
 #include <string.h>
 #include <errno.h>
 
@@ -50,11 +55,159 @@
 	return 1;
 }
 
+struct testsuite {
+	struct string_list tests, failed;
+	int next;
+	int quiet, immediate, verbose, verbose_log, trace, write_junit_xml;
+};
+#define TESTSUITE_INIT \
+	{ STRING_LIST_INIT_DUP, STRING_LIST_INIT_DUP, -1, 0, 0, 0, 0, 0, 0 }
+
+static int next_test(struct child_process *cp, struct strbuf *err, void *cb,
+		     void **task_cb)
+{
+	struct testsuite *suite = cb;
+	const char *test;
+	if (suite->next >= suite->tests.nr)
+		return 0;
+
+	test = suite->tests.items[suite->next++].string;
+	argv_array_pushl(&cp->args, "sh", test, NULL);
+	if (suite->quiet)
+		argv_array_push(&cp->args, "--quiet");
+	if (suite->immediate)
+		argv_array_push(&cp->args, "-i");
+	if (suite->verbose)
+		argv_array_push(&cp->args, "-v");
+	if (suite->verbose_log)
+		argv_array_push(&cp->args, "-V");
+	if (suite->trace)
+		argv_array_push(&cp->args, "-x");
+	if (suite->write_junit_xml)
+		argv_array_push(&cp->args, "--write-junit-xml");
+
+	strbuf_addf(err, "Output of '%s':\n", test);
+	*task_cb = (void *)test;
+
+	return 1;
+}
+
+static int test_finished(int result, struct strbuf *err, void *cb,
+			 void *task_cb)
+{
+	struct testsuite *suite = cb;
+	const char *name = (const char *)task_cb;
+
+	if (result)
+		string_list_append(&suite->failed, name);
+
+	strbuf_addf(err, "%s: '%s'\n", result ? "FAIL" : "SUCCESS", name);
+
+	return 0;
+}
+
+static int test_failed(struct strbuf *out, void *cb, void *task_cb)
+{
+	struct testsuite *suite = cb;
+	const char *name = (const char *)task_cb;
+
+	string_list_append(&suite->failed, name);
+	strbuf_addf(out, "FAILED TO START: '%s'\n", name);
+
+	return 0;
+}
+
+static const char * const testsuite_usage[] = {
+	"test-run-command testsuite [<options>] [<pattern>...]",
+	NULL
+};
+
+static int testsuite(int argc, const char **argv)
+{
+	struct testsuite suite = TESTSUITE_INIT;
+	int max_jobs = 1, i, ret;
+	DIR *dir;
+	struct dirent *d;
+	struct option options[] = {
+		OPT_BOOL('i', "immediate", &suite.immediate,
+			 "stop at first failed test case(s)"),
+		OPT_INTEGER('j', "jobs", &max_jobs, "run <N> jobs in parallel"),
+		OPT_BOOL('q', "quiet", &suite.quiet, "be terse"),
+		OPT_BOOL('v', "verbose", &suite.verbose, "be verbose"),
+		OPT_BOOL('V', "verbose-log", &suite.verbose_log,
+			 "be verbose, redirected to a file"),
+		OPT_BOOL('x', "trace", &suite.trace, "trace shell commands"),
+		OPT_BOOL(0, "write-junit-xml", &suite.write_junit_xml,
+			 "write JUnit-style XML files"),
+		OPT_END()
+	};
+
+	memset(&suite, 0, sizeof(suite));
+	suite.tests.strdup_strings = suite.failed.strdup_strings = 1;
+
+	argc = parse_options(argc, argv, NULL, options,
+			testsuite_usage, PARSE_OPT_STOP_AT_NON_OPTION);
+
+	if (max_jobs <= 0)
+		max_jobs = online_cpus();
+
+	dir = opendir(".");
+	if (!dir)
+		die("Could not open the current directory");
+	while ((d = readdir(dir))) {
+		const char *p = d->d_name;
+
+		if (*p != 't' || !isdigit(p[1]) || !isdigit(p[2]) ||
+		    !isdigit(p[3]) || !isdigit(p[4]) || p[5] != '-' ||
+		    !ends_with(p, ".sh"))
+			continue;
+
+		/* No pattern: match all */
+		if (!argc) {
+			string_list_append(&suite.tests, p);
+			continue;
+		}
+
+		for (i = 0; i < argc; i++)
+			if (!wildmatch(argv[i], p, 0)) {
+				string_list_append(&suite.tests, p);
+				break;
+			}
+	}
+	closedir(dir);
+
+	if (!suite.tests.nr)
+		die("No tests match!");
+	if (max_jobs > suite.tests.nr)
+		max_jobs = suite.tests.nr;
+
+	fprintf(stderr, "Running %d tests (%d at a time)\n",
+		suite.tests.nr, max_jobs);
+
+	ret = run_processes_parallel(max_jobs, next_test, test_failed,
+				     test_finished, &suite);
+
+	if (suite.failed.nr > 0) {
+		ret = 1;
+		fprintf(stderr, "%d tests failed:\n\n", suite.failed.nr);
+		for (i = 0; i < suite.failed.nr; i++)
+			fprintf(stderr, "\t%s\n", suite.failed.items[i].string);
+	}
+
+	string_list_clear(&suite.tests, 0);
+	string_list_clear(&suite.failed, 0);
+
+	return !!ret;
+}
+
 int cmd__run_command(int argc, const char **argv)
 {
 	struct child_process proc = CHILD_PROCESS_INIT;
 	int jobs;
 
+	if (argc > 1 && !strcmp(argv[1], "testsuite"))
+		exit(testsuite(argc - 1, argv + 1));
+
 	if (argc < 3)
 		return 1;
 	while (!strcmp(argv[1], "env")) {
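
The new subcommand is a thin driver around run_processes_parallel(): it
collects tNNNN-*.sh scripts from the current directory (optionally filtered
by the wildmatch patterns given on the command line), runs up to --jobs of
them at once, and reports the scripts that failed; the exit status is
non-zero if any of them did.  A hedged usage sketch, assuming the helper has
been built as t/helper/test-tool and is invoked from inside t/, where the
numbered scripts live:

	# run a subset of the test suite, four scripts at a time
	(cd t && helper/test-tool run-command testsuite --jobs=4 --quiet 't3[45]*.sh')
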
diff --git a/t/t0000-basic.sh b/t/t0000-basic.sh
index 4c01f60..4d3f7ba 100755
--- a/t/t0000-basic.sh
+++ b/t/t0000-basic.sh
@@ -391,6 +391,44 @@
 	)
 "
 
+test_expect_success 'GIT_SKIP_TESTS entire suite' "
+	(
+		GIT_SKIP_TESTS='git' && export GIT_SKIP_TESTS &&
+		run_sub_test_lib_test git-skip-tests-entire-suite \
+			'GIT_SKIP_TESTS entire suite' <<-\\EOF &&
+		for i in 1 2 3
+		do
+			test_expect_success \"passing test #\$i\" 'true'
+		done
+		test_done
+		EOF
+		check_sub_test_lib_test git-skip-tests-entire-suite <<-\\EOF
+		> 1..0 # SKIP skip all tests in git
+		EOF
+	)
+"
+
+test_expect_success 'GIT_SKIP_TESTS does not skip unmatched suite' "
+	(
+		GIT_SKIP_TESTS='notgit' && export GIT_SKIP_TESTS &&
+		run_sub_test_lib_test git-skip-tests-unmatched-suite \
+			'GIT_SKIP_TESTS does not skip unmatched suite' <<-\\EOF &&
+		for i in 1 2 3
+		do
+			test_expect_success \"passing test #\$i\" 'true'
+		done
+		test_done
+		EOF
+		check_sub_test_lib_test git-skip-tests-unmatched-suite <<-\\EOF
+		> ok 1 - passing test #1
+		> ok 2 - passing test #2
+		> ok 3 - passing test #3
+		> # passed all 3 test(s)
+		> 1..3
+		EOF
+	)
+"
+
 test_expect_success '--run basic' "
 	run_sub_test_lib_test run-basic \
 		'--run basic' --run='1 3 5' <<-\\EOF &&
diff --git a/t/t0014-alias.sh b/t/t0014-alias.sh
index a070e64..2694c81 100755
--- a/t/t0014-alias.sh
+++ b/t/t0014-alias.sh
@@ -37,4 +37,11 @@
 #	test_i18ngrep "^fatal: alias loop detected: expansion of" output
 #'
 
+test_expect_success 'run-command formats empty args properly' '
+	GIT_TRACE=1 git frotz a "" b " " c 2>&1 |
+	sed -ne "/run_command:/s/.*trace: run_command: //p" >actual &&
+	echo "git-frotz a '\'''\'' b '\'' '\'' c" >expect &&
+	test_cmp expect actual
+'
+
 test_done
diff --git a/t/t0028-working-tree-encoding.sh b/t/t0028-working-tree-encoding.sh
index 1090e65..7aa0945 100755
--- a/t/t0028-working-tree-encoding.sh
+++ b/t/t0028-working-tree-encoding.sh
@@ -40,7 +40,7 @@
 	printf "$text" | write_utf16 >test.utf16.raw &&
 	printf "$text" | write_utf32 >test.utf32.raw &&
 	printf "\377\376"                         >test.utf16lebom.raw &&
-	printf "$text" | iconv -f UTF-8 -t UTF-32LE >>test.utf16lebom.raw &&
+	printf "$text" | iconv -f UTF-8 -t UTF-16LE >>test.utf16lebom.raw &&
 
 	# Line ending tests
 	printf "one\ntwo\nthree\n" >lf.utf8.raw &&
@@ -280,4 +280,43 @@
 	git reset
 '
 
+# $1: checkout encoding
+# $2: test string
+# $3: binary test string in checkout encoding
+test_commit_utf8_checkout_other () {
+	encoding="$1"
+	orig_string="$2"
+	expect_bytes="$3"
+
+	test_expect_success "Commit UTF-8, checkout $encoding" '
+		test_when_finished "git checkout HEAD -- .gitattributes" &&
+
+		test_ext="commit_utf8_checkout_$encoding" &&
+		test_file="test.$test_ext" &&
+
+		# Commit as UTF-8
+		echo "*.$test_ext text working-tree-encoding=UTF-8" >.gitattributes &&
+		printf "$orig_string" >$test_file &&
+		git add $test_file &&
+		git commit -m "Test data" &&
+
+		# Checkout in tested encoding
+		rm $test_file &&
+		echo "*.$test_ext text working-tree-encoding=$encoding" >.gitattributes &&
+		git checkout HEAD -- $test_file &&
+
+		# Test
+		printf $expect_bytes >$test_file.raw &&
+		test_cmp_bin $test_file.raw $test_file
+	'
+}
+
+test_commit_utf8_checkout_other "UTF-8"        "Test Тест" "\124\145\163\164\040\320\242\320\265\321\201\321\202"
+test_commit_utf8_checkout_other "UTF-16LE"     "Test Тест" "\124\000\145\000\163\000\164\000\040\000\042\004\065\004\101\004\102\004"
+test_commit_utf8_checkout_other "UTF-16BE"     "Test Тест" "\000\124\000\145\000\163\000\164\000\040\004\042\004\065\004\101\004\102"
+test_commit_utf8_checkout_other "UTF-16LE-BOM" "Test Тест" "\377\376\124\000\145\000\163\000\164\000\040\000\042\004\065\004\101\004\102\004"
+test_commit_utf8_checkout_other "UTF-16BE-BOM" "Test Тест" "\376\377\000\124\000\145\000\163\000\164\000\040\004\042\004\065\004\101\004\102"
+test_commit_utf8_checkout_other "UTF-32LE"     "Test Тест" "\124\000\000\000\145\000\000\000\163\000\000\000\164\000\000\000\040\000\000\000\042\004\000\000\065\004\000\000\101\004\000\000\102\004\000\000"
+test_commit_utf8_checkout_other "UTF-32BE"     "Test Тест" "\000\000\000\124\000\000\000\145\000\000\000\163\000\000\000\164\000\000\000\040\000\000\004\042\000\000\004\065\000\000\004\101\000\000\004\102"
+
 test_done
diff --git a/t/t0050-filesystem.sh b/t/t0050-filesystem.sh
index 192c94e..608673f 100755
--- a/t/t0050-filesystem.sh
+++ b/t/t0050-filesystem.sh
@@ -131,4 +131,24 @@
 	git merge topic
 '
 
+test_expect_success CASE_INSENSITIVE_FS 'checkout with no pathspec and a case insensitive fs' '
+	git init repo &&
+	(
+		cd repo &&
+
+		>Gitweb &&
+		git add Gitweb &&
+		git commit -m "add Gitweb" &&
+
+		git checkout --orphan todo &&
+		git reset --hard &&
+		mkdir -p gitweb/subdir &&
+		>gitweb/subdir/file &&
+		git add gitweb &&
+		git commit -m "add gitweb/subdir/file" &&
+
+		git checkout master
+	)
+'
+
 test_done
diff --git a/t/t0061-run-command.sh b/t/t0061-run-command.sh
index 015fac8..17c9c0f 100755
--- a/t/t0061-run-command.sh
+++ b/t/t0061-run-command.sh
@@ -210,10 +210,23 @@
 	test_cmp expect actual
 '
 
-test_expect_success MINGW 'can spawn with argv[0] containing spaces' '
-	cp "$GIT_BUILD_DIR/t/helper/test-fake-ssh$X" ./ &&
-	test_must_fail "$PWD/test-fake-ssh$X" 2>err &&
-	grep TRASH_DIRECTORY err
+test_expect_success MINGW 'can spawn .bat with argv[0] containing spaces' '
+	bat="$TRASH_DIRECTORY/bat with spaces in name.bat" &&
+
+	# Every .bat invocation will log its arguments to file "out"
+	rm -f out &&
+	echo "echo %* >>out" >"$bat" &&
+
+	# Ask git to invoke .bat; clone will fail due to fake SSH helper
+	test_must_fail env GIT_SSH="$bat" git clone myhost:src ssh-clone &&
+
+	# Spawning .bat can fail if there are two quoted cmd.exe arguments.
+	# .bat itself is first (due to spaces in name), so just one more is
+	# needed to verify. GIT_SSH will invoke .bat multiple times:
+	# 1) -G myhost
+	# 2) myhost "git-upload-pack src"
+	# First invocation will always succeed. Test the second one.
+	grep "git-upload-pack" out
 '
 
 test_done
diff --git a/t/t0212-trace2-event.sh b/t/t0212-trace2-event.sh
index ff5b9cc..7065a1b 100755
--- a/t/t0212-trace2-event.sh
+++ b/t/t0212-trace2-event.sh
@@ -265,4 +265,23 @@
 	test_cmp expect actual
 '
 
+test_expect_success 'discard traces when there are too many files' '
+	mkdir trace_target_dir &&
+	test_when_finished "rm -r trace_target_dir" &&
+	(
+		GIT_TRACE2_MAX_FILES=5 &&
+		export GIT_TRACE2_MAX_FILES &&
+		cd trace_target_dir &&
+		test_seq $GIT_TRACE2_MAX_FILES >../expected_filenames.txt &&
+		xargs touch <../expected_filenames.txt &&
+		cd .. &&
+		GIT_TRACE2_EVENT="$(pwd)/trace_target_dir" test-tool trace2 001return 0
+	) &&
+	echo git-trace2-discard >>expected_filenames.txt &&
+	ls trace_target_dir >ls_output.txt &&
+	test_cmp expected_filenames.txt ls_output.txt &&
+	head -n1 trace_target_dir/git-trace2-discard | grep \"event\":\"version\" &&
+	head -n2 trace_target_dir/git-trace2-discard | tail -n1 | grep \"event\":\"too_many_files\"
+'
+
 test_done
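
The knob exercised above caps how many event files trace2 creates when its
target is a directory rather than a single file; once the limit is reached,
further traces are discarded and a git-trace2-discard sentinel file records
the overflow.  A hedged example of the setup the test drives (the target
path is illustrative):

	# write one event file per git process into /tmp/trace, but at most 5
	GIT_TRACE2_MAX_FILES=5 GIT_TRACE2_EVENT=/tmp/trace git status
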
diff --git a/t/t1450-fsck.sh b/t/t1450-fsck.sh
index b36e052..50d28e6 100755
--- a/t/t1450-fsck.sh
+++ b/t/t1450-fsck.sh
@@ -70,7 +70,6 @@
 	test_when_finished "git update-ref -d refs/heads/bogus" &&
 
 	test_must_fail git fsck 2>out &&
-	cat out &&
 	test_i18ngrep "$sha.*corrupt" out
 '
 
@@ -78,7 +77,6 @@
 	git rev-parse HEAD^{tree} >.git/refs/heads/invalid &&
 	test_when_finished "git update-ref -d refs/heads/invalid" &&
 	test_must_fail git fsck 2>out &&
-	cat out &&
 	test_i18ngrep "not a commit" out
 '
 
@@ -88,7 +86,6 @@
 	echo $ZERO_OID >.git/HEAD &&
 	# avoid corrupt/broken HEAD from interfering with repo discovery
 	test_must_fail env GIT_DIR=.git git fsck 2>out &&
-	cat out &&
 	test_i18ngrep "detached HEAD points" out
 '
 
@@ -98,7 +95,6 @@
 	echo "ref: refs/funny/place" >.git/HEAD &&
 	# avoid corrupt/broken HEAD from interfering with repo discovery
 	test_must_fail env GIT_DIR=.git git fsck 2>out &&
-	cat out &&
 	test_i18ngrep "HEAD points to something strange" out
 '
 
@@ -157,7 +153,6 @@
 	git update-ref refs/heads/bogus "$new" &&
 	test_when_finished "git update-ref -d refs/heads/bogus" &&
 	test_must_fail git fsck 2>out &&
-	cat out &&
 	test_i18ngrep "error in commit $new" out
 '
 
@@ -169,7 +164,6 @@
 	git update-ref refs/heads/bogus "$new" &&
 	test_when_finished "git update-ref -d refs/heads/bogus" &&
 	test_must_fail git fsck 2>out &&
-	cat out &&
 	test_i18ngrep "error in commit $new.* - bad name" out
 '
 
@@ -181,7 +175,6 @@
 	git update-ref refs/heads/bogus "$new" &&
 	test_when_finished "git update-ref -d refs/heads/bogus" &&
 	test_must_fail git fsck 2>out &&
-	cat out &&
 	test_i18ngrep "error in commit $new.* - missing email" out
 '
 
@@ -193,7 +186,6 @@
 	git update-ref refs/heads/bogus "$new" &&
 	test_when_finished "git update-ref -d refs/heads/bogus" &&
 	test_must_fail git fsck 2>out &&
-	cat out &&
 	test_i18ngrep "error in commit $new" out
 '
 
@@ -207,7 +199,6 @@
 	git update-ref refs/heads/bogus "$new" &&
 	test_when_finished "git update-ref -d refs/heads/bogus" &&
 	test_must_fail git fsck 2>out &&
-	cat out &&
 	test_i18ngrep "error in commit $new.*integer overflow" out
 '
 
@@ -219,7 +210,6 @@
 	git update-ref refs/heads/bogus "$new" &&
 	test_when_finished "git update-ref -d refs/heads/bogus" &&
 	test_must_fail git fsck 2>out &&
-	cat out &&
 	test_i18ngrep "error in commit $new.*unterminated header: NUL at offset" out
 '
 
@@ -297,7 +287,6 @@
 	echo $tag >.git/refs/tags/invalid &&
 	test_when_finished "git update-ref -d refs/tags/invalid" &&
 	test_must_fail git fsck --tags >out &&
-	cat out &&
 	test_i18ngrep "broken link" out
 '
 
@@ -378,7 +367,6 @@
 	echo $tag >.git/refs/tags/wrong &&
 	test_when_finished "git update-ref -d refs/tags/wrong" &&
 	test_must_fail git fsck --tags 2>out &&
-	cat out &&
 	test_i18ngrep "error in tag $tag.*unterminated header: NUL at offset" out
 '
 
@@ -409,7 +397,6 @@
 	test_when_finished "git update-ref -d refs/heads/bogus" &&
 
 	test_might_fail git rev-list --verify-objects refs/heads/bogus >/dev/null 2>out &&
-	cat out &&
 	test_i18ngrep -q "error: hash mismatch $(dirname $new)$(test_oid ff_2)" out
 '
 
@@ -433,7 +420,6 @@
 	 sha=$(printf "100644 file$_bz$_bzoid" |
 	       git hash-object -w --stdin -t tree) &&
 	  git fsck 2>out &&
-	  cat out &&
 	  test_i18ngrep "warning.*null sha1" out
 	)
 '
@@ -444,7 +430,6 @@
 	 sha=$(printf "160000 submodule$_bz$_bzoid" |
 	       git hash-object -w --stdin -t tree) &&
 	  git fsck 2>out &&
-	  cat out &&
 	  test_i18ngrep "warning.*null sha1" out
 	)
 '
@@ -465,7 +450,6 @@
 			printf "$mode $type %s\t%s" "$value" "$path" >bad &&
 			bad_tree=$(git mktree <bad) &&
 			git fsck 2>out &&
-			cat out &&
 			test_i18ngrep "warning.*tree $bad_tree" out
 		)'
 	done <<-\EOF
diff --git a/t/t3030-merge-recursive.sh b/t/t3030-merge-recursive.sh
index faa8892..2170758 100755
--- a/t/t3030-merge-recursive.sh
+++ b/t/t3030-merge-recursive.sh
@@ -695,15 +695,22 @@
 test_expect_success 'merge-recursive remembers the names of all base trees' '
 	git reset --hard HEAD &&
 
+	# make the index match $c1 so that merge-recursive below does not
+	# fail early
+	git diff --binary HEAD $c1 -- | git apply --cached &&
+
 	# more trees than static slots used by oid_to_hex()
 	for commit in $c0 $c2 $c4 $c5 $c6 $c7
 	do
 		git rev-parse "$commit^{tree}"
 	done >trees &&
 
-	# ignore the return code -- it only fails because the input is weird
+	# ignore the return code; it only fails because the input is weird...
 	test_must_fail git -c merge.verbosity=5 merge-recursive $(cat trees) -- $c1 $c3 >out &&
 
+	# ...but make sure it fails in the expected way
+	test_i18ngrep CONFLICT.*rename/rename out &&
+
 	# merge-recursive prints in reverse order, but we do not care
 	sort <trees >expect &&
 	sed -n "s/^virtual //p" out | sort >actual &&
diff --git a/t/t3206-range-diff.sh b/t/t3206-range-diff.sh
index 0120f76..0579cd9 100755
--- a/t/t3206-range-diff.sh
+++ b/t/t3206-range-diff.sh
@@ -333,6 +333,46 @@
 	test_cmp expected actual
 '
 
+test_expect_success 'file with mode only change' '
+	git range-diff --no-color --submodule=log topic...mode-only-change >actual &&
+	sed s/Z/\ /g >expected <<-EOF &&
+	1:  fccce22 ! 1:  4d39cb3 s/4/A/
+	    @@ Metadata
+	    ZAuthor: Thomas Rast <trast@inf.ethz.ch>
+	    Z
+	    Z ## Commit message ##
+	    -    s/4/A/
+	    +    s/4/A/ + add other-file
+	    Z
+	    Z ## file ##
+	    Z@@
+	    @@ file
+	    Z A
+	    Z 6
+	    Z 7
+	    +
+	    + ## other-file (new) ##
+	2:  147e64e ! 2:  26c107f s/11/B/
+	    @@ Metadata
+	    ZAuthor: Thomas Rast <trast@inf.ethz.ch>
+	    Z
+	    Z ## Commit message ##
+	    -    s/11/B/
+	    +    s/11/B/ + mode change other-file
+	    Z
+	    Z ## file ##
+	    Z@@ file: A
+	    @@ file: A
+	    Z 12
+	    Z 13
+	    Z 14
+	    +
+	    + ## other-file (mode change 100644 => 100755) ##
+	3:  a63e992 = 3:  4c1e0f5 s/12/B/
+	EOF
+	test_cmp expected actual
+'
+
 test_expect_success 'file added and later removed' '
 	git range-diff --no-color --submodule=log topic...added-removed >actual &&
 	sed s/Z/\ /g >expected <<-EOF &&
@@ -461,4 +501,8 @@
 	grep "> 1: .* new message" 0001-*
 '
 
+test_expect_success 'range-diff overrides diff.noprefix internally' '
+	git -c diff.noprefix=true range-diff HEAD^...
+'
+
 test_done
diff --git a/t/t3206/history.export b/t/t3206/history.export
index 7bb3814..4c808e5 100644
--- a/t/t3206/history.export
+++ b/t/t3206/history.export
@@ -55,7 +55,7 @@
 19
 20
 
-commit refs/heads/topic
+commit refs/heads/mode-only-change
 mark :4
 author Thomas Rast <trast@inf.ethz.ch> 1374485014 +0200
 committer Thomas Rast <trast@inf.ethz.ch> 1374485014 +0200
@@ -678,3 +678,32 @@
 from :55
 M 100644 :9 renamed-file
 
+commit refs/heads/mode-only-change
+mark :57
+author Thomas Rast <trast@inf.ethz.ch> 1374485024 +0200
+committer Thomas Gummerer <t.gummerer@gmail.com> 1570473767 +0100
+data 24
+s/4/A/ + add other-file
+from :4
+M 100644 :5 file
+M 100644 :49 other-file
+
+commit refs/heads/mode-only-change
+mark :58
+author Thomas Rast <trast@inf.ethz.ch> 1374485036 +0200
+committer Thomas Gummerer <t.gummerer@gmail.com> 1570473768 +0100
+data 33
+s/11/B/ + mode change other-file
+from :57
+M 100644 :7 file
+M 100755 :49 other-file
+
+commit refs/heads/mode-only-change
+mark :59
+author Thomas Rast <trast@inf.ethz.ch> 1374485044 +0200
+committer Thomas Gummerer <t.gummerer@gmail.com> 1570473768 +0100
+data 8
+s/12/B/
+from :58
+M 100644 :9 file
+
diff --git a/t/t3404-rebase-interactive.sh b/t/t3404-rebase-interactive.sh
index 29a3584..d2dfbe4 100755
--- a/t/t3404-rebase-interactive.sh
+++ b/t/t3404-rebase-interactive.sh
@@ -1014,9 +1014,9 @@
 	test 0 = $(git cat-file commit HEAD | grep -c ^parent\ )
 '
 
-test_expect_success 'rebase -i --root reword root commit' '
+test_expect_success 'rebase -i --root reword original root commit' '
 	test_when_finished "test_might_fail git rebase --abort" &&
-	git checkout -b reword-root-branch master &&
+	git checkout -b reword-original-root-branch master &&
 	set_fake_editor &&
 	FAKE_LINES="reword 1 2" FAKE_COMMIT_MESSAGE="A changed" \
 	git rebase -i --root &&
@@ -1024,6 +1024,16 @@
 	test -z "$(git show -s --format=%p HEAD^)"
 '
 
+test_expect_success 'rebase -i --root reword new root commit' '
+	test_when_finished "test_might_fail git rebase --abort" &&
+	git checkout -b reword-now-root-branch master &&
+	set_fake_editor &&
+	FAKE_LINES="reword 3 1" FAKE_COMMIT_MESSAGE="C changed" \
+	git rebase -i --root &&
+	git show HEAD^ | grep "C changed" &&
+	test -z "$(git show -s --format=%p HEAD^)"
+'
+
 test_expect_success 'rebase -i --root when root has untracked file conflict' '
 	test_when_finished "reset_rebase" &&
 	git checkout -b failing-root-pick A &&
@@ -1052,7 +1062,7 @@
 '
 
 test_expect_success C_LOCALE_OUTPUT 'rebase --edit-todo does not work on non-interactive rebase' '
-	git checkout reword-root-branch &&
+	git checkout reword-original-root-branch &&
 	git reset --hard &&
 	git checkout conflict-branch &&
 	set_fake_editor &&
diff --git a/t/t3429-rebase-edit-todo.sh b/t/t3429-rebase-edit-todo.sh
index 76f6d30..8739cb6 100755
--- a/t/t3429-rebase-edit-todo.sh
+++ b/t/t3429-rebase-edit-todo.sh
@@ -3,9 +3,15 @@
 test_description='rebase should reread the todo file if an exec modifies it'
 
 . ./test-lib.sh
+. "$TEST_DIRECTORY"/lib-rebase.sh
+
+test_expect_success 'setup' '
+	test_commit first file &&
+	test_commit second file &&
+	test_commit third file
+'
 
 test_expect_success 'rebase exec modifies rebase-todo' '
-	test_commit initial &&
 	todo=.git/rebase-merge/git-rebase-todo &&
 	git rebase HEAD -x "echo exec touch F >>$todo" &&
 	test -e F
@@ -33,4 +39,17 @@
 	git rebase HEAD -x "./append-todo.sh 5 6"
 '
 
+test_expect_success 'todo is re-read after reword and squash' '
+	write_script reword-editor.sh <<-\EOS &&
+	GIT_SEQUENCE_EDITOR="echo \"exec echo $(cat file) >>actual\" >>" \
+		git rebase --edit-todo
+	EOS
+
+	test_write_lines first third >expected &&
+	set_fake_editor &&
+	GIT_SEQUENCE_EDITOR="$EDITOR" FAKE_LINES="reword 1 squash 2 fixup 3" \
+		GIT_EDITOR=./reword-editor.sh git rebase -i --root third &&
+	test_cmp expected actual
+'
+
 test_done
diff --git a/t/t3701-add-interactive.sh b/t/t3701-add-interactive.sh
index 69991a3..d50e165 100755
--- a/t/t3701-add-interactive.sh
+++ b/t/t3701-add-interactive.sh
@@ -314,7 +314,7 @@
 	git commit -am "clear local changes" &&
 	git apply patch &&
 	printf "%s\n" s y y | git add -p file 2>error |
-		sed -n -e "s/^Stage this hunk[^@]*\(@@ .*\)/\1/" \
+		sed -n -e "s/^([1-2]\/[1-2]) Stage this hunk[^@]*\(@@ .*\)/\1/" \
 		       -e "/^[-+@ \\\\]"/p  >output &&
 	test_must_be_empty error &&
 	git diff --cached >diff &&
diff --git a/t/t3908-stash-in-worktree.sh b/t/t3908-stash-in-worktree.sh
new file mode 100755
index 0000000..2b2b366
--- /dev/null
+++ b/t/t3908-stash-in-worktree.sh
@@ -0,0 +1,27 @@
+#!/bin/sh
+#
+# Copyright (c) 2019 Johannes E Schindelin
+#
+
+test_description='Test git stash in a worktree'
+
+. ./test-lib.sh
+
+test_expect_success 'setup' '
+	test_commit initial &&
+	git worktree add wt &&
+	test_commit -C wt in-worktree
+'
+
+test_expect_success 'apply in subdirectory' '
+	mkdir wt/subdir &&
+	(
+		cd wt/subdir &&
+		echo modified >../initial.t &&
+		git stash &&
+		git stash apply >out
+	) &&
+	grep "\.\.\/initial\.t" wt/subdir/out
+'
+
+test_done
diff --git a/t/t4014-format-patch.sh b/t/t4014-format-patch.sh
index 83f5261..72b0989 100755
--- a/t/t4014-format-patch.sh
+++ b/t/t4014-format-patch.sh
@@ -1610,8 +1610,9 @@
 	test_config format.outputDirectory patches &&
 	rm -fr patches &&
 	git format-patch master..side &&
-	git rev-list master..side >list &&
-	test_line_count = $(ls patches | wc -l) list
+	count=$(git rev-list --count master..side) &&
+	ls patches >list &&
+	test_line_count = $count list
 '
 
 test_expect_success 'format-patch -o overrides format.outputDirectory' '
diff --git a/t/t4210-log-i18n.sh b/t/t4210-log-i18n.sh
index 7c51943..6e61f57 100755
--- a/t/t4210-log-i18n.sh
+++ b/t/t4210-log-i18n.sh
@@ -1,12 +1,15 @@
 #!/bin/sh
 
 test_description='test log with i18n features'
-. ./test-lib.sh
+. ./lib-gettext.sh
 
 # two forms of é
 utf8_e=$(printf '\303\251')
 latin1_e=$(printf '\351')
 
+# invalid UTF-8
+invalid_e=$(printf '\303\50)') # ")" at end to close opening "("
+
 test_expect_success 'create commits in different encodings' '
 	test_tick &&
 	cat >msg <<-EOF &&
@@ -48,9 +51,43 @@
 	test_must_be_empty actual
 '
 
-test_expect_success 'log --grep does not find non-reencoded values (latin1)' '
+test_expect_success !MINGW 'log --grep does not find non-reencoded values (latin1)' '
 	git log --encoding=ISO-8859-1 --format=%s --grep=$utf8_e >actual &&
 	test_must_be_empty actual
 '
 
+for engine in fixed basic extended perl
+do
+	prereq=
+	if test $engine = "perl"
+	then
+		prereq="PCRE"
+	else
+		prereq=""
+	fi
+	force_regex=
+	if test $engine != "fixed"
+	then
+	    force_regex=.*
+	fi
+	test_expect_success !MINGW,GETTEXT_LOCALE,$prereq "-c grep.patternType=$engine log --grep finds re-encoded values (latin1 + locale)" "
+		cat >expect <<-\EOF &&
+		latin1
+		utf8
+		EOF
+		LC_ALL=\"$is_IS_locale\" git -c grep.patternType=$engine log --encoding=ISO-8859-1 --format=%s --grep=\"$force_regex$latin1_e\" >actual &&
+		test_cmp expect actual
+	"
+
+	test_expect_success !MINGW,GETTEXT_LOCALE,$prereq "-c grep.patternType=$engine log --grep does not find non-reencoded values (latin1 + locale)" "
+		LC_ALL=\"$is_IS_locale\" git -c grep.patternType=$engine log --encoding=ISO-8859-1 --format=%s --grep=\"$force_regex$utf8_e\" >actual &&
+		test_must_be_empty actual
+	"
+
+	test_expect_success !MINGW,GETTEXT_LOCALE,$prereq "-c grep.patternType=$engine log --grep does not die on invalid UTF-8 value (latin1 + locale + invalid needle)" "
+		LC_ALL=\"$is_IS_locale\" git -c grep.patternType=$engine log --encoding=ISO-8859-1 --format=%s --grep=\"$force_regex$invalid_e\" >actual &&
+		test_must_be_empty actual
+	"
+done
+
 test_done
diff --git a/t/t4214-log-graph-octopus.sh b/t/t4214-log-graph-octopus.sh
index dab96c8..3ae8e51 100755
--- a/t/t4214-log-graph-octopus.sh
+++ b/t/t4214-log-graph-octopus.sh
@@ -5,6 +5,25 @@
 . ./test-lib.sh
 
 test_expect_success 'set up merge history' '
+	test_commit initial &&
+	for i in 1 2 3 4 ; do
+		git checkout master -b $i || return $?
+		# Make tag name different from branch name, to avoid
+		# ambiguity error when calling checkout.
+		test_commit $i $i $i tag$i || return $?
+	done &&
+	git checkout 1 -b merge &&
+	test_merge octopus-merge 1 2 3 4 &&
+	test_commit after-merge &&
+	git checkout 1 -b L &&
+	test_commit left &&
+	git checkout 4 -b crossover &&
+	test_commit after-4 &&
+	git checkout initial -b more-L &&
+	test_commit after-initial
+'
+
+test_expect_success 'log --graph with tricky octopus merge, no color' '
 	cat >expect.uncolored <<-\EOF &&
 	* left
 	| *---.   octopus-merge
@@ -19,6 +38,13 @@
 	|/
 	* initial
 	EOF
+	git log --color=never --graph --date-order --pretty=tformat:%s left octopus-merge >actual.raw &&
+	sed "s/ *\$//" actual.raw >actual &&
+	test_cmp expect.uncolored actual
+'
+
+test_expect_success 'log --graph with tricky octopus merge with colors' '
+	test_config log.graphColors red,green,yellow,blue,magenta,cyan &&
 	cat >expect.colors <<-\EOF &&
 	* left
 	<RED>|<RESET> *<BLUE>-<RESET><BLUE>-<RESET><MAGENTA>-<RESET><MAGENTA>.<RESET>   octopus-merge
@@ -33,33 +59,11 @@
 	<MAGENTA>|<RESET><MAGENTA>/<RESET>
 	* initial
 	EOF
-	test_commit initial &&
-	for i in 1 2 3 4 ; do
-		git checkout master -b $i || return $?
-		# Make tag name different from branch name, to avoid
-		# ambiguity error when calling checkout.
-		test_commit $i $i $i tag$i || return $?
-	done &&
-	git checkout 1 -b merge &&
-	test_tick &&
-	git merge -m octopus-merge 1 2 3 4 &&
-	git checkout 1 -b L &&
-	test_commit left
-'
-
-test_expect_success 'log --graph with tricky octopus merge with colors' '
-	test_config log.graphColors red,green,yellow,blue,magenta,cyan &&
-	git log --color=always --graph --date-order --pretty=tformat:%s --all >actual.colors.raw &&
+	git log --color=always --graph --date-order --pretty=tformat:%s left octopus-merge >actual.colors.raw &&
 	test_decode_color <actual.colors.raw | sed "s/ *\$//" >actual.colors &&
 	test_cmp expect.colors actual.colors
 '
 
-test_expect_success 'log --graph with tricky octopus merge, no color' '
-	git log --color=never --graph --date-order --pretty=tformat:%s --all >actual.raw &&
-	sed "s/ *\$//" actual.raw >actual &&
-	test_cmp expect.uncolored actual
-'
-
 # Repeat the previous two tests with "normal" octopus merge (i.e.,
 # without the first parent skewing to the "left" branch column).
 
@@ -76,7 +80,7 @@
 	|/
 	* initial
 	EOF
-	git log --color=never --graph --date-order --pretty=tformat:%s merge >actual.raw &&
+	git log --color=never --graph --date-order --pretty=tformat:%s octopus-merge >actual.raw &&
 	sed "s/ *\$//" actual.raw >actual &&
 	test_cmp expect.uncolored actual
 '
@@ -95,8 +99,283 @@
 	* initial
 	EOF
 	test_config log.graphColors red,green,yellow,blue,magenta,cyan &&
-	git log --color=always --graph --date-order --pretty=tformat:%s merge >actual.colors.raw &&
+	git log --color=always --graph --date-order --pretty=tformat:%s octopus-merge >actual.colors.raw &&
 	test_decode_color <actual.colors.raw | sed "s/ *\$//" >actual.colors &&
 	test_cmp expect.colors actual.colors
 '
+
+test_expect_success 'log --graph with normal octopus merge and child, no color' '
+	cat >expect.uncolored <<-\EOF &&
+	* after-merge
+	*---.   octopus-merge
+	|\ \ \
+	| | | * 4
+	| | * | 3
+	| | |/
+	| * | 2
+	| |/
+	* | 1
+	|/
+	* initial
+	EOF
+	git log --color=never --graph --date-order --pretty=tformat:%s after-merge >actual.raw &&
+	sed "s/ *\$//" actual.raw >actual &&
+	test_cmp expect.uncolored actual
+'
+
+test_expect_failure 'log --graph with normal octopus and child merge with colors' '
+	cat >expect.colors <<-\EOF &&
+	* after-merge
+	*<BLUE>-<RESET><BLUE>-<RESET><MAGENTA>-<RESET><MAGENTA>.<RESET>   octopus-merge
+	<GREEN>|<RESET><YELLOW>\<RESET> <BLUE>\<RESET> <MAGENTA>\<RESET>
+	<GREEN>|<RESET> <YELLOW>|<RESET> <BLUE>|<RESET> * 4
+	<GREEN>|<RESET> <YELLOW>|<RESET> * <MAGENTA>|<RESET> 3
+	<GREEN>|<RESET> <YELLOW>|<RESET> <MAGENTA>|<RESET><MAGENTA>/<RESET>
+	<GREEN>|<RESET> * <MAGENTA>|<RESET> 2
+	<GREEN>|<RESET> <MAGENTA>|<RESET><MAGENTA>/<RESET>
+	* <MAGENTA>|<RESET> 1
+	<MAGENTA>|<RESET><MAGENTA>/<RESET>
+	* initial
+	EOF
+	test_config log.graphColors red,green,yellow,blue,magenta,cyan &&
+	git log --color=always --graph --date-order --pretty=tformat:%s after-merge >actual.colors.raw &&
+	test_decode_color <actual.colors.raw | sed "s/ *\$//" >actual.colors &&
+	test_cmp expect.colors actual.colors
+'
+
+test_expect_success 'log --graph with tricky octopus merge and its child, no color' '
+	cat >expect.uncolored <<-\EOF &&
+	* left
+	| * after-merge
+	| *---.   octopus-merge
+	| |\ \ \
+	|/ / / /
+	| | | * 4
+	| | * | 3
+	| | |/
+	| * | 2
+	| |/
+	* | 1
+	|/
+	* initial
+	EOF
+	git log --color=never --graph --date-order --pretty=tformat:%s left after-merge >actual.raw &&
+	sed "s/ *\$//" actual.raw >actual &&
+	test_cmp expect.uncolored actual
+'
+
+test_expect_failure 'log --graph with tricky octopus merge and its child with colors' '
+	test_config log.graphColors red,green,yellow,blue,magenta,cyan &&
+	cat >expect.colors <<-\EOF &&
+	* left
+	<RED>|<RESET> * after-merge
+	<RED>|<RESET> *<MAGENTA>-<RESET><MAGENTA>-<RESET><CYAN>-<RESET><CYAN>.<RESET>   octopus-merge
+	<RED>|<RESET> <RED>|<RESET><BLUE>\<RESET> <MAGENTA>\<RESET> <CYAN>\<RESET>
+	<RED>|<RESET><RED>/<RESET> <BLUE>/<RESET> <MAGENTA>/<RESET> <CYAN>/<RESET>
+	<RED>|<RESET> <BLUE>|<RESET> <MAGENTA>|<RESET> * 4
+	<RED>|<RESET> <BLUE>|<RESET> * <CYAN>|<RESET> 3
+	<RED>|<RESET> <BLUE>|<RESET> <CYAN>|<RESET><CYAN>/<RESET>
+	<RED>|<RESET> * <CYAN>|<RESET> 2
+	<RED>|<RESET> <CYAN>|<RESET><CYAN>/<RESET>
+	* <CYAN>|<RESET> 1
+	<CYAN>|<RESET><CYAN>/<RESET>
+	* initial
+	EOF
+	git log --color=always --graph --date-order --pretty=tformat:%s left after-merge >actual.colors.raw &&
+	test_decode_color <actual.colors.raw | sed "s/ *\$//" >actual.colors &&
+	test_cmp expect.colors actual.colors
+'
+
+test_expect_success 'log --graph with crossover in octopus merge, no color' '
+	cat >expect.uncolored <<-\EOF &&
+	* after-4
+	| *---.   octopus-merge
+	| |\ \ \
+	| |_|_|/
+	|/| | |
+	* | | | 4
+	| | | * 3
+	| |_|/
+	|/| |
+	| | * 2
+	| |/
+	|/|
+	| * 1
+	|/
+	* initial
+	EOF
+	git log --color=never --graph --date-order --pretty=tformat:%s after-4 octopus-merge >actual.raw &&
+	sed "s/ *\$//" actual.raw >actual &&
+	test_cmp expect.uncolored actual
+'
+
+test_expect_failure 'log --graph with crossover in octopus merge with colors' '
+	test_config log.graphColors red,green,yellow,blue,magenta,cyan &&
+	cat >expect.colors <<-\EOF &&
+	* after-4
+	<RED>|<RESET> *<BLUE>-<RESET><BLUE>-<RESET><RED>-<RESET><RED>.<RESET>   octopus-merge
+	<RED>|<RESET> <GREEN>|<RESET><YELLOW>\<RESET> <BLUE>\<RESET> <RED>\<RESET>
+	<RED>|<RESET> <GREEN>|<RESET><RED>_<RESET><YELLOW>|<RESET><RED>_<RESET><BLUE>|<RESET><RED>/<RESET>
+	<RED>|<RESET><RED>/<RESET><GREEN>|<RESET> <YELLOW>|<RESET> <BLUE>|<RESET>
+	* <GREEN>|<RESET> <YELLOW>|<RESET> <BLUE>|<RESET> 4
+	<MAGENTA>|<RESET> <GREEN>|<RESET> <YELLOW>|<RESET> * 3
+	<MAGENTA>|<RESET> <GREEN>|<RESET><MAGENTA>_<RESET><YELLOW>|<RESET><MAGENTA>/<RESET>
+	<MAGENTA>|<RESET><MAGENTA>/<RESET><GREEN>|<RESET> <YELLOW>|<RESET>
+	<MAGENTA>|<RESET> <GREEN>|<RESET> * 2
+	<MAGENTA>|<RESET> <GREEN>|<RESET><MAGENTA>/<RESET>
+	<MAGENTA>|<RESET><MAGENTA>/<RESET><GREEN>|<RESET>
+	<MAGENTA>|<RESET> * 1
+	<MAGENTA>|<RESET><MAGENTA>/<RESET>
+	* initial
+	EOF
+	git log --color=always --graph --date-order --pretty=tformat:%s after-4 octopus-merge >actual.colors.raw &&
+	test_decode_color <actual.colors.raw | sed "s/ *\$//" >actual.colors &&
+	test_cmp expect.colors actual.colors
+'
+
+test_expect_success 'log --graph with crossover in octopus merge and its child, no color' '
+	cat >expect.uncolored <<-\EOF &&
+	* after-4
+	| * after-merge
+	| *---.   octopus-merge
+	| |\ \ \
+	| |_|_|/
+	|/| | |
+	* | | | 4
+	| | | * 3
+	| |_|/
+	|/| |
+	| | * 2
+	| |/
+	|/|
+	| * 1
+	|/
+	* initial
+	EOF
+	git log --color=never --graph --date-order --pretty=tformat:%s after-4 after-merge >actual.raw &&
+	sed "s/ *\$//" actual.raw >actual &&
+	test_cmp expect.uncolored actual
+'
+
+test_expect_failure 'log --graph with crossover in octopus merge and its child with colors' '
+	test_config log.graphColors red,green,yellow,blue,magenta,cyan &&
+	cat >expect.colors <<-\EOF &&
+	* after-4
+	<RED>|<RESET> * after-merge
+	<RED>|<RESET> *<MAGENTA>-<RESET><MAGENTA>-<RESET><RED>-<RESET><RED>.<RESET>   octopus-merge
+	<RED>|<RESET> <YELLOW>|<RESET><BLUE>\<RESET> <MAGENTA>\<RESET> <RED>\<RESET>
+	<RED>|<RESET> <YELLOW>|<RESET><RED>_<RESET><BLUE>|<RESET><RED>_<RESET><MAGENTA>|<RESET><RED>/<RESET>
+	<RED>|<RESET><RED>/<RESET><YELLOW>|<RESET> <BLUE>|<RESET> <MAGENTA>|<RESET>
+	* <YELLOW>|<RESET> <BLUE>|<RESET> <MAGENTA>|<RESET> 4
+	<CYAN>|<RESET> <YELLOW>|<RESET> <BLUE>|<RESET> * 3
+	<CYAN>|<RESET> <YELLOW>|<RESET><CYAN>_<RESET><BLUE>|<RESET><CYAN>/<RESET>
+	<CYAN>|<RESET><CYAN>/<RESET><YELLOW>|<RESET> <BLUE>|<RESET>
+	<CYAN>|<RESET> <YELLOW>|<RESET> * 2
+	<CYAN>|<RESET> <YELLOW>|<RESET><CYAN>/<RESET>
+	<CYAN>|<RESET><CYAN>/<RESET><YELLOW>|<RESET>
+	<CYAN>|<RESET> * 1
+	<CYAN>|<RESET><CYAN>/<RESET>
+	* initial
+	EOF
+	git log --color=always --graph --date-order --pretty=tformat:%s after-4 after-merge >actual.colors.raw &&
+	test_decode_color <actual.colors.raw | sed "s/ *\$//" >actual.colors &&
+	test_cmp expect.colors actual.colors
+'
+
+test_expect_success 'log --graph with unrelated commit and octopus tip, no color' '
+	cat >expect.uncolored <<-\EOF &&
+	* after-initial
+	| *---.   octopus-merge
+	| |\ \ \
+	| | | | * 4
+	| |_|_|/
+	|/| | |
+	| | | * 3
+	| |_|/
+	|/| |
+	| | * 2
+	| |/
+	|/|
+	| * 1
+	|/
+	* initial
+	EOF
+	git log --color=never --graph --date-order --pretty=tformat:%s after-initial octopus-merge >actual.raw &&
+	sed "s/ *\$//" actual.raw >actual &&
+	test_cmp expect.uncolored actual
+'
+
+test_expect_success 'log --graph with unrelated commit and octopus tip with colors' '
+	test_config log.graphColors red,green,yellow,blue,magenta,cyan &&
+	cat >expect.colors <<-\EOF &&
+	* after-initial
+	<RED>|<RESET> *<BLUE>-<RESET><BLUE>-<RESET><MAGENTA>-<RESET><MAGENTA>.<RESET>   octopus-merge
+	<RED>|<RESET> <GREEN>|<RESET><YELLOW>\<RESET> <BLUE>\<RESET> <MAGENTA>\<RESET>
+	<RED>|<RESET> <GREEN>|<RESET> <YELLOW>|<RESET> <BLUE>|<RESET> * 4
+	<RED>|<RESET> <GREEN>|<RESET><RED>_<RESET><YELLOW>|<RESET><RED>_<RESET><BLUE>|<RESET><RED>/<RESET>
+	<RED>|<RESET><RED>/<RESET><GREEN>|<RESET> <YELLOW>|<RESET> <BLUE>|<RESET>
+	<RED>|<RESET> <GREEN>|<RESET> <YELLOW>|<RESET> * 3
+	<RED>|<RESET> <GREEN>|<RESET><RED>_<RESET><YELLOW>|<RESET><RED>/<RESET>
+	<RED>|<RESET><RED>/<RESET><GREEN>|<RESET> <YELLOW>|<RESET>
+	<RED>|<RESET> <GREEN>|<RESET> * 2
+	<RED>|<RESET> <GREEN>|<RESET><RED>/<RESET>
+	<RED>|<RESET><RED>/<RESET><GREEN>|<RESET>
+	<RED>|<RESET> * 1
+	<RED>|<RESET><RED>/<RESET>
+	* initial
+	EOF
+	git log --color=always --graph --date-order --pretty=tformat:%s after-initial octopus-merge >actual.colors.raw &&
+	test_decode_color <actual.colors.raw | sed "s/ *\$//" >actual.colors &&
+	test_cmp expect.colors actual.colors
+'
+
+test_expect_success 'log --graph with unrelated commit and octopus child, no color' '
+	cat >expect.uncolored <<-\EOF &&
+	* after-initial
+	| * after-merge
+	| *---.   octopus-merge
+	| |\ \ \
+	| | | | * 4
+	| |_|_|/
+	|/| | |
+	| | | * 3
+	| |_|/
+	|/| |
+	| | * 2
+	| |/
+	|/|
+	| * 1
+	|/
+	* initial
+	EOF
+	git log --color=never --graph --date-order --pretty=tformat:%s after-initial after-merge >actual.raw &&
+	sed "s/ *\$//" actual.raw >actual &&
+	test_cmp expect.uncolored actual
+'
+
+test_expect_failure 'log --graph with unrelated commit and octopus child with colors' '
+	test_config log.graphColors red,green,yellow,blue,magenta,cyan &&
+	cat >expect.colors <<-\EOF &&
+	* after-initial
+	<RED>|<RESET> * after-merge
+	<RED>|<RESET> *<MAGENTA>-<RESET><MAGENTA>-<RESET><CYAN>-<RESET><CYAN>.<RESET>   octopus-merge
+	<RED>|<RESET> <YELLOW>|<RESET><BLUE>\<RESET> <MAGENTA>\<RESET> <CYAN>\<RESET>
+	<RED>|<RESET> <YELLOW>|<RESET> <BLUE>|<RESET> <MAGENTA>|<RESET> * 4
+	<RED>|<RESET> <YELLOW>|<RESET><RED>_<RESET><BLUE>|<RESET><RED>_<RESET><MAGENTA>|<RESET><RED>/<RESET>
+	<RED>|<RESET><RED>/<RESET><YELLOW>|<RESET> <BLUE>|<RESET> <MAGENTA>|<RESET>
+	<RED>|<RESET> <YELLOW>|<RESET> <BLUE>|<RESET> * 3
+	<RED>|<RESET> <YELLOW>|<RESET><RED>_<RESET><BLUE>|<RESET><RED>/<RESET>
+	<RED>|<RESET><RED>/<RESET><YELLOW>|<RESET> <BLUE>|<RESET>
+	<RED>|<RESET> <YELLOW>|<RESET> * 2
+	<RED>|<RESET> <YELLOW>|<RESET><RED>/<RESET>
+	<RED>|<RESET><RED>/<RESET><YELLOW>|<RESET>
+	<RED>|<RESET> * 1
+	<RED>|<RESET><RED>/<RESET>
+	* initial
+	EOF
+	git log --color=always --graph --date-order --pretty=tformat:%s after-initial after-merge >actual.colors.raw &&
+	test_decode_color <actual.colors.raw | sed "s/ *\$//" >actual.colors &&
+	test_cmp expect.colors actual.colors
+'
+
 test_done
diff --git a/t/t5514-fetch-multiple.sh b/t/t5514-fetch-multiple.sh
index 5426d4b..de8e2f1 100755
--- a/t/t5514-fetch-multiple.sh
+++ b/t/t5514-fetch-multiple.sh
@@ -183,4 +183,15 @@
 	test_cmp expect test8/output
 '
 
+test_expect_success 'parallel' '
+	git remote add one ./bogus1 &&
+	git remote add two ./bogus2 &&
+
+	test_must_fail env GIT_TRACE="$PWD/trace" \
+		git fetch --jobs=2 --multiple one two 2>err &&
+	grep "preparing to run up to 2 tasks" trace &&
+	test_i18ngrep "could not fetch .one.*128" err &&
+	test_i18ngrep "could not fetch .two.*128" err
+'
+
 test_done
diff --git a/t/t5801-remote-helpers.sh b/t/t5801-remote-helpers.sh
index 2d6c4a2..121e5c6 100755
--- a/t/t5801-remote-helpers.sh
+++ b/t/t5801-remote-helpers.sh
@@ -247,7 +247,6 @@
 test_expect_success 'proper failure checks for fetching' '
 	(cd local &&
 	test_must_fail env GIT_REMOTE_TESTGIT_FAILURE=1 git fetch 2>error &&
-	cat error &&
 	test_i18ngrep -q "error while running fast-import" error
 	)
 '
diff --git a/t/t6036-recursive-corner-cases.sh b/t/t6036-recursive-corner-cases.sh
index d23b948..7fddcc8 100755
--- a/t/t6036-recursive-corner-cases.sh
+++ b/t/t6036-recursive-corner-cases.sh
@@ -1562,6 +1562,7 @@
 		cd nested_conflicts &&
 
 		git clean -f &&
+		MASTER=$(git rev-parse --short master) &&
 		git checkout L2^0 &&
 
 		# Merge must fail; there is a conflict
@@ -1582,7 +1583,7 @@
 		git cat-file -p R1:a >theirs &&
 		test_must_fail git merge-file --diff3 \
 			-L "Temporary merge branch 1" \
-			-L "merged common ancestors"  \
+			-L "$MASTER"  \
 			-L "Temporary merge branch 2" \
 			ours  \
 			base  \
@@ -1594,7 +1595,7 @@
 		git cat-file -p R1:b >theirs &&
 		test_must_fail git merge-file --diff3 \
 			-L "Temporary merge branch 1" \
-			-L "merged common ancestors"  \
+			-L "$MASTER"  \
 			-L "Temporary merge branch 2" \
 			ours  \
 			base  \
@@ -1732,6 +1733,7 @@
 	(
 		cd virtual_merge_base_has_nested_conflicts &&
 
+		MASTER=$(git rev-parse --short master) &&
 		git checkout L3^0 &&
 
 		# Merge must fail; there is a conflict
@@ -1760,7 +1762,7 @@
 		cp left merged-once &&
 		test_must_fail git merge-file --diff3 \
 			-L "Temporary merge branch 1" \
-			-L "merged common ancestors"  \
+			-L "$MASTER"  \
 			-L "Temporary merge branch 2" \
 			merged-once \
 			base        \
diff --git a/t/t6047-diff3-conflict-markers.sh b/t/t6047-diff3-conflict-markers.sh
new file mode 100755
index 0000000..860542a
--- /dev/null
+++ b/t/t6047-diff3-conflict-markers.sh
@@ -0,0 +1,202 @@
+#!/bin/sh
+
+test_description='recursive merge diff3 style conflict markers'
+
+. ./test-lib.sh
+
+# Setup:
+#          L1
+#            \
+#             ?
+#            /
+#          R1
+#
+# Where:
+#   L1 and R1 both have a file named 'content' but have no common history
+#
+
+test_expect_success 'setup no merge base' '
+	test_create_repo no_merge_base &&
+	(
+		cd no_merge_base &&
+
+		git checkout -b L &&
+		test_commit A content A &&
+
+		git checkout --orphan R &&
+		test_commit B content B
+	)
+'
+
+test_expect_success 'check no merge base' '
+	(
+		cd no_merge_base &&
+
+		git checkout L^0 &&
+
+		test_must_fail git -c merge.conflictstyle=diff3 merge --allow-unrelated-histories -s recursive R^0 &&
+
+		grep "|||||| empty tree" content
+	)
+'
+
+# Setup:
+#          L1
+#         /  \
+#   master    ?
+#         \  /
+#          R1
+#
+# Where:
+#   L1 and R1 have modified the same file ('content') in conflicting ways
+#
+
+test_expect_success 'setup unique merge base' '
+	test_create_repo unique_merge_base &&
+	(
+		cd unique_merge_base &&
+
+		test_commit base content "1
+2
+3
+4
+5
+" &&
+
+		git branch L &&
+		git branch R &&
+
+		git checkout L &&
+		test_commit L content "1
+2
+3
+4
+5
+7" &&
+
+		git checkout R &&
+		git rm content &&
+		test_commit R renamed "1
+2
+3
+4
+5
+six"
+	)
+'
+
+test_expect_success 'check unique merge base' '
+	(
+		cd unique_merge_base &&
+
+		git checkout L^0 &&
+		MASTER=$(git rev-parse --short master) &&
+
+		test_must_fail git -c merge.conflictstyle=diff3 merge -s recursive R^0 &&
+
+		grep "|||||| $MASTER:content" renamed
+	)
+'
+
+# Setup:
+#          L1---L2--L3
+#         /  \ /      \
+#   master    X1       ?
+#         \  / \      /
+#          R1---R2--R3
+#
+# Where:
+#   commits L1 and R1 have modified the same file in non-conflicting ways
+#   X1 is an auto-generated merge-base used when merging L1 and R1
+#   commits L2 and R2 are merges of R1 and L1 into L1 and R1, respectively
+#   commits L3 and R3 both modify 'content' in conflicting ways
+#
+
+test_expect_success 'setup multiple merge bases' '
+	test_create_repo multiple_merge_bases &&
+	(
+		cd multiple_merge_bases &&
+
+		test_commit initial content "1
+2
+3
+4
+5" &&
+
+		git branch L &&
+		git branch R &&
+
+		# Create L1
+		git checkout L &&
+		test_commit L1 content "0
+1
+2
+3
+4
+5" &&
+
+		# Create R1
+		git checkout R &&
+		test_commit R1 content "1
+2
+3
+4
+5
+6" &&
+
+		# Create L2
+		git checkout L &&
+		git merge R1 &&
+
+		# Create R2
+		git checkout R &&
+		git merge L1 &&
+
+		# Create L3
+		git checkout L &&
+		test_commit L3 content "0
+1
+2
+3
+4
+5
+A" &&
+
+		# Create R3
+		git checkout R &&
+		git rm content &&
+		test_commit R3 renamed "0
+2
+3
+4
+5
+six"
+	)
+'
+
+test_expect_success 'check multiple merge bases' '
+	(
+		cd multiple_merge_bases &&
+
+		git checkout L^0 &&
+
+		test_must_fail git -c merge.conflictstyle=diff3 merge -s recursive R^0 &&
+
+		grep "|||||| merged common ancestors:content" renamed
+	)
+'
+
+test_expect_success 'rebase describes fake ancestor base' '
+	test_create_repo rebase &&
+	(
+		cd rebase &&
+		test_commit base file &&
+		test_commit master file &&
+		git checkout -b side HEAD^ &&
+		test_commit side file &&
+		test_must_fail git -c merge.conflictstyle=diff3 rebase master &&
+		grep "||||||| constructed merge base" file
+	)
+'
+
+test_done
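For orientation, the markers these new t6047 tests grep for come from merge.conflictstyle=diff3, where the middle marker names the merge base: per the tests above, that label becomes the abbreviated base commit plus the path (e.g. "$MASTER:content") when a unique real base exists, stays "merged common ancestors" when the base is a virtual merge of multiple bases, reads "empty tree" when there is no base at all, and "constructed merge base" for rebase's fake ancestor. A minimal sketch of the resulting conflict block, with a placeholder hash and filler content (the tests only grep for the "||||||" label line):

	<<<<<<< HEAD
	...our side...
	||||||| 9123abc:content
	...merge base...
	=======
	...their side...
	>>>>>>> R^0
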
diff --git a/t/t6120-describe.sh b/t/t6120-describe.sh
index 2b883d8..45047d0 100755
--- a/t/t6120-describe.sh
+++ b/t/t6120-describe.sh
@@ -424,4 +424,19 @@
 	test_must_fail git describe $ZERO_OID
 '
 
+test_expect_success 'name-rev a rev shortly after epoch' '
+	test_when_finished "git checkout master" &&
+
+	git checkout --orphan no-timestamp-underflow &&
+	# Any date closer to epoch than the CUTOFF_DATE_SLOP constant
+	# in builtin/name-rev.c.
+	GIT_COMMITTER_DATE="@1234 +0000" \
+	git commit -m "committer date shortly after epoch" &&
+	old_commit_oid=$(git rev-parse HEAD) &&
+
+	echo "$old_commit_oid no-timestamp-underflow" >expect &&
+	git name-rev $old_commit_oid >actual &&
+	test_cmp expect actual
+'
+
 test_done
diff --git a/t/t7009-filter-branch-null-sha1.sh b/t/t7008-filter-branch-null-sha1.sh
similarity index 100%
rename from t/t7009-filter-branch-null-sha1.sh
rename to t/t7008-filter-branch-null-sha1.sh
diff --git a/t/t7008-grep-binary.sh b/t/t7008-grep-binary.sh
deleted file mode 100755
index 2d87c49..0000000
--- a/t/t7008-grep-binary.sh
+++ /dev/null
@@ -1,249 +0,0 @@
-#!/bin/sh
-
-test_description='git grep in binary files'
-
-. ./test-lib.sh
-
-nul_match () {
-	matches=$1
-	flags=$2
-	pattern=$3
-	pattern_human=$(echo "$pattern" | sed 's/Q/<NUL>/g')
-
-	if test "$matches" = 1
-	then
-		test_expect_success "git grep -f f $flags '$pattern_human' a" "
-			printf '$pattern' | q_to_nul >f &&
-			git grep -f f $flags a
-		"
-	elif test "$matches" = 0
-	then
-		test_expect_success "git grep -f f $flags '$pattern_human' a" "
-			printf '$pattern' | q_to_nul >f &&
-			test_must_fail git grep -f f $flags a
-		"
-	elif test "$matches" = T1
-	then
-		test_expect_failure "git grep -f f $flags '$pattern_human' a" "
-			printf '$pattern' | q_to_nul >f &&
-			git grep -f f $flags a
-		"
-	elif test "$matches" = T0
-	then
-		test_expect_failure "git grep -f f $flags '$pattern_human' a" "
-			printf '$pattern' | q_to_nul >f &&
-			test_must_fail git grep -f f $flags a
-		"
-	else
-		test_expect_success "PANIC: Test framework error. Unknown matches value $matches" 'false'
-	fi
-}
-
-test_expect_success 'setup' "
-	echo 'binaryQfileQm[*]cQ*æQð' | q_to_nul >a &&
-	git add a &&
-	git commit -m.
-"
-
-test_expect_success 'git grep ina a' '
-	echo Binary file a matches >expect &&
-	git grep ina a >actual &&
-	test_cmp expect actual
-'
-
-test_expect_success 'git grep -ah ina a' '
-	git grep -ah ina a >actual &&
-	test_cmp a actual
-'
-
-test_expect_success 'git grep -I ina a' '
-	test_must_fail git grep -I ina a >actual &&
-	test_must_be_empty actual
-'
-
-test_expect_success 'git grep -c ina a' '
-	echo a:1 >expect &&
-	git grep -c ina a >actual &&
-	test_cmp expect actual
-'
-
-test_expect_success 'git grep -l ina a' '
-	echo a >expect &&
-	git grep -l ina a >actual &&
-	test_cmp expect actual
-'
-
-test_expect_success 'git grep -L bar a' '
-	echo a >expect &&
-	git grep -L bar a >actual &&
-	test_cmp expect actual
-'
-
-test_expect_success 'git grep -q ina a' '
-	git grep -q ina a >actual &&
-	test_must_be_empty actual
-'
-
-test_expect_success 'git grep -F ile a' '
-	git grep -F ile a
-'
-
-test_expect_success 'git grep -Fi iLE a' '
-	git grep -Fi iLE a
-'
-
-# This test actually passes on platforms where regexec() supports the
-# flag REG_STARTEND.
-test_expect_success 'git grep ile a' '
-	git grep ile a
-'
-
-test_expect_failure 'git grep .fi a' '
-	git grep .fi a
-'
-
-nul_match 1 '-F' 'yQf'
-nul_match 0 '-F' 'yQx'
-nul_match 1 '-Fi' 'YQf'
-nul_match 0 '-Fi' 'YQx'
-nul_match 1 '' 'yQf'
-nul_match 0 '' 'yQx'
-nul_match 1 '' 'æQð'
-nul_match 1 '-F' 'eQm[*]c'
-nul_match 1 '-Fi' 'EQM[*]C'
-
-# Regex patterns that would match but shouldn't with -F
-nul_match 0 '-F' 'yQ[f]'
-nul_match 0 '-F' '[y]Qf'
-nul_match 0 '-Fi' 'YQ[F]'
-nul_match 0 '-Fi' '[Y]QF'
-nul_match 0 '-F' 'æQ[ð]'
-nul_match 0 '-F' '[æ]Qð'
-nul_match 0 '-Fi' 'ÆQ[Ð]'
-nul_match 0 '-Fi' '[Æ]QÐ'
-
-# kwset is disabled on -i & non-ASCII. No way to match non-ASCII \0
-# patterns case-insensitively.
-nul_match T1 '-i' 'ÆQÐ'
-
-# \0 implicitly disables regexes. This is an undocumented internal
-# limitation.
-nul_match T1 '' 'yQ[f]'
-nul_match T1 '' '[y]Qf'
-nul_match T1 '-i' 'YQ[F]'
-nul_match T1 '-i' '[Y]Qf'
-nul_match T1 '' 'æQ[ð]'
-nul_match T1 '' '[æ]Qð'
-nul_match T1 '-i' 'ÆQ[Ð]'
-
-# ... because of \0 implicitly disabling regexes regexes that
-# should/shouldn't match don't do the right thing.
-nul_match T1 '' 'eQm.*cQ'
-nul_match T1 '-i' 'EQM.*cQ'
-nul_match T0 '' 'eQm[*]c'
-nul_match T0 '-i' 'EQM[*]C'
-
-# Due to the REG_STARTEND extension when kwset() is disabled on -i &
-# non-ASCII the string will be matched in its entirety, but the
-# pattern will be cut off at the first \0.
-nul_match 0 '-i' 'NOMATCHQð'
-nul_match T0 '-i' '[Æ]QNOMATCH'
-nul_match T0 '-i' '[æ]QNOMATCH'
-# Matches, but for the wrong reasons, just stops at [æ]
-nul_match 1 '-i' '[Æ]Qð'
-nul_match 1 '-i' '[æ]Qð'
-
-# Ensure that the matcher doesn't regress to something that stops at
-# \0
-nul_match 0 '-F' 'yQ[f]'
-nul_match 0 '-Fi' 'YQ[F]'
-nul_match 0 '' 'yQNOMATCH'
-nul_match 0 '' 'QNOMATCH'
-nul_match 0 '-i' 'YQNOMATCH'
-nul_match 0 '-i' 'QNOMATCH'
-nul_match 0 '-F' 'æQ[ð]'
-nul_match 0 '-Fi' 'ÆQ[Ð]'
-nul_match 0 '' 'yQNÓMATCH'
-nul_match 0 '' 'QNÓMATCH'
-nul_match 0 '-i' 'YQNÓMATCH'
-nul_match 0 '-i' 'QNÓMATCH'
-
-test_expect_success 'grep respects binary diff attribute' '
-	echo text >t &&
-	git add t &&
-	echo t:text >expect &&
-	git grep text t >actual &&
-	test_cmp expect actual &&
-	echo "t -diff" >.gitattributes &&
-	echo "Binary file t matches" >expect &&
-	git grep text t >actual &&
-	test_cmp expect actual
-'
-
-test_expect_success 'grep --cached respects binary diff attribute' '
-	git grep --cached text t >actual &&
-	test_cmp expect actual
-'
-
-test_expect_success 'grep --cached respects binary diff attribute (2)' '
-	git add .gitattributes &&
-	rm .gitattributes &&
-	git grep --cached text t >actual &&
-	test_when_finished "git rm --cached .gitattributes" &&
-	test_when_finished "git checkout .gitattributes" &&
-	test_cmp expect actual
-'
-
-test_expect_success 'grep revision respects binary diff attribute' '
-	git commit -m new &&
-	echo "Binary file HEAD:t matches" >expect &&
-	git grep text HEAD -- t >actual &&
-	test_when_finished "git reset HEAD^" &&
-	test_cmp expect actual
-'
-
-test_expect_success 'grep respects not-binary diff attribute' '
-	echo binQary | q_to_nul >b &&
-	git add b &&
-	echo "Binary file b matches" >expect &&
-	git grep bin b >actual &&
-	test_cmp expect actual &&
-	echo "b diff" >.gitattributes &&
-	echo "b:binQary" >expect &&
-	git grep bin b >actual.raw &&
-	nul_to_q <actual.raw >actual &&
-	test_cmp expect actual
-'
-
-cat >nul_to_q_textconv <<'EOF'
-#!/bin/sh
-"$PERL_PATH" -pe 'y/\000/Q/' < "$1"
-EOF
-chmod +x nul_to_q_textconv
-
-test_expect_success 'setup textconv filters' '
-	echo a diff=foo >.gitattributes &&
-	git config diff.foo.textconv "\"$(pwd)\""/nul_to_q_textconv
-'
-
-test_expect_success 'grep does not honor textconv' '
-	test_must_fail git grep Qfile
-'
-
-test_expect_success 'grep --textconv honors textconv' '
-	echo "a:binaryQfileQm[*]cQ*æQð" >expect &&
-	git grep --textconv Qfile >actual &&
-	test_cmp expect actual
-'
-
-test_expect_success 'grep --no-textconv does not honor textconv' '
-	test_must_fail git grep --no-textconv Qfile
-'
-
-test_expect_success 'grep --textconv blob honors textconv' '
-	echo "HEAD:a:binaryQfileQm[*]cQ*æQð" >expect &&
-	git grep --textconv Qfile HEAD:a >actual &&
-	test_cmp expect actual
-'
-
-test_done
diff --git a/t/t7300-clean.sh b/t/t7300-clean.sh
index d01fd12..6e6d24c 100755
--- a/t/t7300-clean.sh
+++ b/t/t7300-clean.sh
@@ -117,6 +117,7 @@
 	would_clean=$(
 		cd docs &&
 		git clean -n ../src |
+		grep part3 |
 		sed -n -e "s|^Would remove ||p"
 	) &&
 	verbose test "$would_clean" = ../src/part3.c
@@ -129,6 +130,7 @@
 	would_clean=$(
 		cd docs &&
 		git clean -n "$(pwd)/../src" |
+		grep part3 |
 		sed -n -e "s|^Would remove ||p"
 	) &&
 	verbose test "$would_clean" = ../src/part3.c
@@ -547,7 +549,7 @@
 	test_path_is_missing strange_bare
 '
 
-test_expect_success 'giving path in nested git work tree will remove it' '
+test_expect_success 'giving path in nested git work tree will NOT remove it' '
 	rm -fr repo &&
 	mkdir repo &&
 	(
@@ -559,7 +561,7 @@
 	git clean -f -d repo/bar/baz &&
 	test_path_is_file repo/.git/HEAD &&
 	test_path_is_dir repo/bar/ &&
-	test_path_is_missing repo/bar/baz
+	test_path_is_file repo/bar/baz/hello.world
 '
 
 test_expect_success 'giving path to nested .git will not remove it' '
@@ -577,7 +579,7 @@
 	test_path_is_dir untracked/
 '
 
-test_expect_success 'giving path to nested .git/ will remove contents' '
+test_expect_success 'giving path to nested .git/ will NOT remove contents' '
 	rm -fr repo untracked &&
 	mkdir repo untracked &&
 	(
@@ -587,7 +589,7 @@
 	) &&
 	git clean -f -d repo/.git/ &&
 	test_path_is_dir repo/.git &&
-	test_dir_is_empty repo/.git &&
+	test_path_is_file repo/.git/HEAD &&
 	test_path_is_dir untracked/
 '
 
@@ -669,7 +671,7 @@
 	test_path_is_missing foo/b/bb
 '
 
-test_expect_failure 'git clean -d skips nested repo containing ignored files' '
+test_expect_success 'git clean -d skips nested repo containing ignored files' '
 	test_when_finished "rm -rf nested-repo-with-ignored-file" &&
 
 	git init nested-repo-with-ignored-file &&
@@ -691,6 +693,38 @@
 	test_path_is_file nested-repo-with-ignored-file/file
 '
 
+test_expect_success 'git clean handles being told what to clean' '
+	mkdir -p d1 d2 &&
+	touch d1/ut d2/ut &&
+	git clean -f */ut &&
+	test_path_is_missing d1/ut &&
+	test_path_is_missing d2/ut
+'
+
+test_expect_success 'git clean handles being told what to clean, with -d' '
+	mkdir -p d1 d2 &&
+	touch d1/ut d2/ut &&
+	git clean -ffd */ut &&
+	test_path_is_missing d1/ut &&
+	test_path_is_missing d2/ut
+'
+
+test_expect_success 'git clean works if a glob is passed without -d' '
+	mkdir -p d1 d2 &&
+	touch d1/ut d2/ut &&
+	git clean -f "*ut" &&
+	test_path_is_missing d1/ut &&
+	test_path_is_missing d2/ut
+'
+
+test_expect_success 'git clean works if a glob is passed with -d' '
+	mkdir -p d1 d2 &&
+	touch d1/ut d2/ut &&
+	git clean -ffd "*ut" &&
+	test_path_is_missing d1/ut &&
+	test_path_is_missing d2/ut
+'
+
 test_expect_success MINGW 'handle clean & core.longpaths = false nicely' '
 	test_config core.longpaths false &&
 	a50=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa &&
diff --git a/t/t7400-submodule-basic.sh b/t/t7400-submodule-basic.sh
index a208cb2..691b5fc 100755
--- a/t/t7400-submodule-basic.sh
+++ b/t/t7400-submodule-basic.sh
@@ -527,7 +527,6 @@
 	test_must_fail git config submodule.example.url &&
 
 	git submodule update init 2> update.out &&
-	cat update.out &&
 	test_i18ngrep "not initialized" update.out &&
 	test_must_fail git rev-parse --resolve-git-dir init/.git &&
 
@@ -545,7 +544,6 @@
 	(
 		cd sub &&
 		git submodule update ../init 2>update.out &&
-		cat update.out &&
 		test_i18ngrep "not initialized" update.out &&
 		test_must_fail git rev-parse --resolve-git-dir ../init/.git &&
 
diff --git a/t/t7505-prepare-commit-msg-hook.sh b/t/t7505-prepare-commit-msg-hook.sh
index ba8bd1b..94f85cd 100755
--- a/t/t7505-prepare-commit-msg-hook.sh
+++ b/t/t7505-prepare-commit-msg-hook.sh
@@ -241,13 +241,7 @@
 			git add b &&
 			git rebase --continue
 		) &&
-		if test "$mode" = -p # reword amended after pick
-		then
-			n=18
-		else
-			n=17
-		fi &&
-		git log --pretty=%s -g -n$n HEAD@{1} >actual &&
+		git log --pretty=%s -g -n18 HEAD@{1} >actual &&
 		test_cmp "$TEST_DIRECTORY/t7505/expected-rebase${mode:--i}" actual
 	'
 }
diff --git a/t/t7505/expected-rebase-i b/t/t7505/expected-rebase-i
index c514bdb..93bada5 100644
--- a/t/t7505/expected-rebase-i
+++ b/t/t7505/expected-rebase-i
@@ -7,7 +7,8 @@
 message [fixup rebase-9]
 message (no editor) [fixup rebase-8]
 message (no editor) [squash rebase-7]
-message [reword rebase-6]
+HEAD [reword rebase-6]
+message (no editor) [reword rebase-6]
 message [squash rebase-5]
 message (no editor) [fixup rebase-4]
 message (no editor) [pick rebase-3]
diff --git a/t/t7812-grep-icase-non-ascii.sh b/t/t7812-grep-icase-non-ascii.sh
index 0c685d3..531eb59 100755
--- a/t/t7812-grep-icase-non-ascii.sh
+++ b/t/t7812-grep-icase-non-ascii.sh
@@ -53,4 +53,32 @@
 	test_cmp expected actual
 '
 
+test_expect_success GETTEXT_LOCALE,LIBPCRE2 'PCRE v2: setup invalid UTF-8 data' '
+	printf "\\200\\n" >invalid-0x80 &&
+	echo "ævar" >expected &&
+	cat expected >>invalid-0x80 &&
+	git add invalid-0x80
+'
+
+test_expect_success GETTEXT_LOCALE,LIBPCRE2 'PCRE v2: grep ASCII from invalid UTF-8 data' '
+	git grep -h "var" invalid-0x80 >actual &&
+	test_cmp expected actual &&
+	git grep -h "(*NO_JIT)var" invalid-0x80 >actual &&
+	test_cmp expected actual
+'
+
+test_expect_success GETTEXT_LOCALE,LIBPCRE2 'PCRE v2: grep non-ASCII from invalid UTF-8 data' '
+	git grep -h "æ" invalid-0x80 >actual &&
+	test_cmp expected actual &&
+	git grep -h "(*NO_JIT)æ" invalid-0x80 &&
+	test_cmp expected actual
+'
+
+test_expect_success GETTEXT_LOCALE,LIBPCRE2 'PCRE v2: grep non-ASCII from invalid UTF-8 data with -i' '
+	test_might_fail git grep -hi "Æ" invalid-0x80 >actual &&
+	test_cmp expected actual &&
+	test_must_fail git grep -hi "(*NO_JIT)Æ" invalid-0x80 &&
+	test_cmp expected actual
+'
+
 test_done
diff --git a/t/t7815-grep-binary.sh b/t/t7815-grep-binary.sh
new file mode 100755
index 0000000..90ebb64
--- /dev/null
+++ b/t/t7815-grep-binary.sh
@@ -0,0 +1,148 @@
+#!/bin/sh
+
+test_description='git grep in binary files'
+
+. ./test-lib.sh
+
+test_expect_success 'setup' "
+	echo 'binaryQfileQm[*]cQ*æQð' | q_to_nul >a &&
+	git add a &&
+	git commit -m.
+"
+
+test_expect_success 'git grep ina a' '
+	echo Binary file a matches >expect &&
+	git grep ina a >actual &&
+	test_cmp expect actual
+'
+
+test_expect_success 'git grep -ah ina a' '
+	git grep -ah ina a >actual &&
+	test_cmp a actual
+'
+
+test_expect_success 'git grep -I ina a' '
+	test_must_fail git grep -I ina a >actual &&
+	test_must_be_empty actual
+'
+
+test_expect_success 'git grep -c ina a' '
+	echo a:1 >expect &&
+	git grep -c ina a >actual &&
+	test_cmp expect actual
+'
+
+test_expect_success 'git grep -l ina a' '
+	echo a >expect &&
+	git grep -l ina a >actual &&
+	test_cmp expect actual
+'
+
+test_expect_success 'git grep -L bar a' '
+	echo a >expect &&
+	git grep -L bar a >actual &&
+	test_cmp expect actual
+'
+
+test_expect_success 'git grep -q ina a' '
+	git grep -q ina a >actual &&
+	test_must_be_empty actual
+'
+
+test_expect_success 'git grep -F ile a' '
+	git grep -F ile a
+'
+
+test_expect_success 'git grep -Fi iLE a' '
+	git grep -Fi iLE a
+'
+
+# This test actually passes on platforms where regexec() supports the
+# flag REG_STARTEND.
+test_expect_success 'git grep ile a' '
+	git grep ile a
+'
+
+test_expect_failure 'git grep .fi a' '
+	git grep .fi a
+'
+
+test_expect_success 'grep respects binary diff attribute' '
+	echo text >t &&
+	git add t &&
+	echo t:text >expect &&
+	git grep text t >actual &&
+	test_cmp expect actual &&
+	echo "t -diff" >.gitattributes &&
+	echo "Binary file t matches" >expect &&
+	git grep text t >actual &&
+	test_cmp expect actual
+'
+
+test_expect_success 'grep --cached respects binary diff attribute' '
+	git grep --cached text t >actual &&
+	test_cmp expect actual
+'
+
+test_expect_success 'grep --cached respects binary diff attribute (2)' '
+	git add .gitattributes &&
+	rm .gitattributes &&
+	git grep --cached text t >actual &&
+	test_when_finished "git rm --cached .gitattributes" &&
+	test_when_finished "git checkout .gitattributes" &&
+	test_cmp expect actual
+'
+
+test_expect_success 'grep revision respects binary diff attribute' '
+	git commit -m new &&
+	echo "Binary file HEAD:t matches" >expect &&
+	git grep text HEAD -- t >actual &&
+	test_when_finished "git reset HEAD^" &&
+	test_cmp expect actual
+'
+
+test_expect_success 'grep respects not-binary diff attribute' '
+	echo binQary | q_to_nul >b &&
+	git add b &&
+	echo "Binary file b matches" >expect &&
+	git grep bin b >actual &&
+	test_cmp expect actual &&
+	echo "b diff" >.gitattributes &&
+	echo "b:binQary" >expect &&
+	git grep bin b >actual.raw &&
+	nul_to_q <actual.raw >actual &&
+	test_cmp expect actual
+'
+
+cat >nul_to_q_textconv <<'EOF'
+#!/bin/sh
+"$PERL_PATH" -pe 'y/\000/Q/' < "$1"
+EOF
+chmod +x nul_to_q_textconv
+
+test_expect_success 'setup textconv filters' '
+	echo a diff=foo >.gitattributes &&
+	git config diff.foo.textconv "\"$(pwd)\""/nul_to_q_textconv
+'
+
+test_expect_success 'grep does not honor textconv' '
+	test_must_fail git grep Qfile
+'
+
+test_expect_success 'grep --textconv honors textconv' '
+	echo "a:binaryQfileQm[*]cQ*æQð" >expect &&
+	git grep --textconv Qfile >actual &&
+	test_cmp expect actual
+'
+
+test_expect_success 'grep --no-textconv does not honor textconv' '
+	test_must_fail git grep --no-textconv Qfile
+'
+
+test_expect_success 'grep --textconv blob honors textconv' '
+	echo "HEAD:a:binaryQfileQm[*]cQ*æQð" >expect &&
+	git grep --textconv Qfile HEAD:a >actual &&
+	test_cmp expect actual
+'
+
+test_done
diff --git a/t/t7816-grep-binary-pattern.sh b/t/t7816-grep-binary-pattern.sh
new file mode 100755
index 0000000..60bab29
--- /dev/null
+++ b/t/t7816-grep-binary-pattern.sh
@@ -0,0 +1,127 @@
+#!/bin/sh
+
+test_description='git grep with binary pattern files'
+
+. ./lib-gettext.sh
+
+nul_match_internal () {
+	matches=$1
+	prereqs=$2
+	lc_all=$3
+	extra_flags=$4
+	flags=$5
+	pattern=$6
+	pattern_human=$(echo "$pattern" | sed 's/Q/<NUL>/g')
+
+	if test "$matches" = 1
+	then
+		test_expect_success $prereqs "LC_ALL='$lc_all' git grep $extra_flags -f f $flags '$pattern_human' a" "
+			printf '$pattern' | q_to_nul >f &&
+			LC_ALL='$lc_all' git grep $extra_flags -f f $flags a
+		"
+	elif test "$matches" = 0
+	then
+		test_expect_success $prereqs "LC_ALL='$lc_all' git grep $extra_flags -f f $flags '$pattern_human' a" "
+			>stderr &&
+			printf '$pattern' | q_to_nul >f &&
+			test_must_fail env LC_ALL=\"$lc_all\" git grep $extra_flags -f f $flags a 2>stderr &&
+			test_i18ngrep ! 'This is only supported with -P under PCRE v2' stderr
+		"
+	elif test "$matches" = P
+	then
+		test_expect_success $prereqs "error, PCRE v2 only: LC_ALL='$lc_all' git grep -f f $flags '$pattern_human' a" "
+			>stderr &&
+			printf '$pattern' | q_to_nul >f &&
+			test_must_fail env LC_ALL=\"$lc_all\" git grep -f f $flags a 2>stderr &&
+			test_i18ngrep 'This is only supported with -P under PCRE v2' stderr
+		"
+	else
+		test_expect_success "PANIC: Test framework error. Unknown matches value $matches" 'false'
+	fi
+}
+
+nul_match () {
+	matches=$1
+	matches_pcre2=$2
+	matches_pcre2_locale=$3
+	flags=$4
+	pattern=$5
+	pattern_human=$(echo "$pattern" | sed 's/Q/<NUL>/g')
+
+	nul_match_internal "$matches" "" "C" "" "$flags" "$pattern"
+	nul_match_internal "$matches_pcre2" "LIBPCRE2" "C" "-P" "$flags" "$pattern"
+	nul_match_internal "$matches_pcre2_locale" "LIBPCRE2,GETTEXT_LOCALE" "$is_IS_locale" "-P" "$flags" "$pattern"
+}
+
+test_expect_success 'setup' "
+	echo 'binaryQfileQm[*]cQ*æQð' | q_to_nul >a &&
+	git add a &&
+	git commit -m.
+"
+
+# Simple fixed-string matching that can use kwset (no -i && non-ASCII)
+nul_match P P P '-F' 'yQf'
+nul_match P P P '-F' 'yQx'
+nul_match P P P '-Fi' 'YQf'
+nul_match P P P '-Fi' 'YQx'
+nul_match P P 1 '' 'yQf'
+nul_match P P 0 '' 'yQx'
+nul_match P P 1 '' 'æQð'
+nul_match P P P '-F' 'eQm[*]c'
+nul_match P P P '-Fi' 'EQM[*]C'
+
+# Regex patterns that would match but shouldn't with -F
+nul_match P P P '-F' 'yQ[f]'
+nul_match P P P '-F' '[y]Qf'
+nul_match P P P '-Fi' 'YQ[F]'
+nul_match P P P '-Fi' '[Y]QF'
+nul_match P P P '-F' 'æQ[ð]'
+nul_match P P P '-F' '[æ]Qð'
+
+# The -F kwset codepath can't handle -i && non-ASCII...
+nul_match P 1 1 '-i' '[æ]Qð'
+
+# ...PCRE v2 only matches non-ASCII with -i casefolding under UTF-8
+# semantics
+nul_match P P P '-Fi' 'ÆQ[Ð]'
+nul_match P 0 1 '-i'  'ÆQ[Ð]'
+nul_match P 0 1 '-i'  '[Æ]QÐ'
+nul_match P 0 1 '-i' '[Æ]Qð'
+nul_match P 0 1 '-i' 'ÆQÐ'
+
+# \0 in regexes can only work with -P & PCRE v2
+nul_match P P 1 '' 'yQ[f]'
+nul_match P P 1 '' '[y]Qf'
+nul_match P P 1 '-i' 'YQ[F]'
+nul_match P P 1 '-i' '[Y]Qf'
+nul_match P P 1 '' 'æQ[ð]'
+nul_match P P 1 '' '[æ]Qð'
+nul_match P P 1 '-i' 'ÆQ[Ð]'
+nul_match P P 1 '' 'eQm.*cQ'
+nul_match P P 1 '-i' 'EQM.*cQ'
+nul_match P P 0 '' 'eQm[*]c'
+nul_match P P 0 '-i' 'EQM[*]C'
+
+# Assert that we're using REG_STARTEND and the pattern doesn't match
+# just because it's cut off at the first \0.
+nul_match P P 0 '-i' 'NOMATCHQð'
+nul_match P P 0 '-i' '[Æ]QNOMATCH'
+nul_match P P 0 '-i' '[æ]QNOMATCH'
+
+# Ensure that the matcher doesn't regress to something that stops at
+# \0
+nul_match P P P '-F' 'yQ[f]'
+nul_match P P P '-Fi' 'YQ[F]'
+nul_match P P 0 '' 'yQNOMATCH'
+nul_match P P 0 '' 'QNOMATCH'
+nul_match P P 0 '-i' 'YQNOMATCH'
+nul_match P P 0 '-i' 'QNOMATCH'
+nul_match P P P '-F' 'æQ[ð]'
+nul_match P P P '-Fi' 'ÆQ[Ð]'
+nul_match P P 1 '-i' 'ÆQ[Ð]'
+nul_match P P 0 '' 'yQNÓMATCH'
+nul_match P P 0 '' 'QNÓMATCH'
+nul_match P P 0 '-i' 'YQNÓMATCH'
+nul_match P P 0 '-i' 'QNÓMATCH'
+
+test_done
diff --git a/t/t9300-fast-import.sh b/t/t9300-fast-import.sh
index 141b7fa..e707fb8 100755
--- a/t/t9300-fast-import.sh
+++ b/t/t9300-fast-import.sh
@@ -85,6 +85,36 @@
 	An annotated tag that annotates a blob.
 	EOF
 
+	tag to-be-deleted
+	from :3
+	data <<EOF
+	Another annotated tag that annotates a blob.
+	EOF
+
+	reset refs/tags/to-be-deleted
+	from 0000000000000000000000000000000000000000
+
+	tag nested
+	mark :6
+	from :4
+	data <<EOF
+	Tag of our lovely commit
+	EOF
+
+	reset refs/tags/nested
+	from 0000000000000000000000000000000000000000
+
+	tag nested
+	mark :7
+	from :6
+	data <<EOF
+	Tag of tag of our lovely commit
+	EOF
+
+	alias
+	mark :8
+	to :5
+
 	INPUT_END
 	git fast-import --export-marks=marks.out <input &&
 	git whatchanged master
@@ -157,12 +187,19 @@
 	test_cmp expect actual
 '
 
+test_expect_success 'A: verify tag deletion is successful' '
+	test_must_fail git rev-parse --verify refs/tags/to-be-deleted
+'
+
 test_expect_success 'A: verify marks output' '
 	cat >expect <<-EOF &&
 	:2 $(git rev-parse --verify master:file2)
 	:3 $(git rev-parse --verify master:file3)
 	:4 $(git rev-parse --verify master:file4)
 	:5 $(git rev-parse --verify master^0)
+	:6 $(git cat-file tag nested | grep object | cut -d" " -f 2)
+	:7 $(git rev-parse --verify nested)
+	:8 $(git rev-parse --verify master^0)
 	EOF
 	test_cmp expect marks.out
 '
@@ -2781,7 +2818,6 @@
 	COMMIT
 	M 100644 :403x hello.c
 	EOF
-	cat err &&
 	test_i18ngrep "space after mark" err
 '
 
@@ -2798,7 +2834,6 @@
 	inline
 	BLOB
 	EOF
-	cat err &&
 	test_i18ngrep "nvalid dataref" err
 '
 
@@ -2812,7 +2847,6 @@
 	COMMIT
 	M 100644 ${sha1}x hello.c
 	EOF
-	cat err &&
 	test_i18ngrep "space after SHA1" err
 '
 
@@ -2828,7 +2862,6 @@
 	COMMIT
 	N :202x :302
 	EOF
-	cat err &&
 	test_i18ngrep "space after mark" err
 '
 
@@ -2844,7 +2877,6 @@
 	note blob
 	BLOB
 	EOF
-	cat err &&
 	test_i18ngrep "nvalid dataref" err
 '
 
@@ -2858,7 +2890,6 @@
 	COMMIT
 	N ${sha1}x :302
 	EOF
-	cat err &&
 	test_i18ngrep "space after SHA1" err
 '
 
@@ -2874,7 +2905,6 @@
 	COMMIT
 	N :202 :302x
 	EOF
-	cat err &&
 	test_i18ngrep "after mark" err
 '
 
@@ -2908,7 +2938,6 @@
 	EOF
 
 	# now evaluate the error
-	cat err &&
 	test_i18ngrep "after mark" err
 '
 
@@ -2928,7 +2957,6 @@
 	merge :303x
 	M 100644 :403 hello.c
 	EOF
-	cat err &&
 	test_i18ngrep "after mark" err
 '
 
@@ -2944,7 +2972,6 @@
 	tag S
 	TAG
 	EOF
-	cat err &&
 	test_i18ngrep "after mark" err
 '
 
@@ -2955,7 +2982,6 @@
 	test_must_fail git fast-import --import-marks=marks <<-EOF 2>err &&
 	cat-blob :403x
 	EOF
-	cat err &&
 	test_i18ngrep "after mark" err
 '
 
@@ -2966,7 +2992,6 @@
 	test_must_fail git fast-import --import-marks=marks <<-EOF 2>err &&
 	ls :302x hello.c
 	EOF
-	cat err &&
 	test_i18ngrep "space after mark" err
 '
 
@@ -2975,7 +3000,6 @@
 	test_must_fail git fast-import --import-marks=marks <<-EOF 2>err &&
 	ls ${sha1}x hello.c
 	EOF
-	cat err &&
 	test_i18ngrep "space after tree-ish" err
 '
 
diff --git a/t/t9350-fast-export.sh b/t/t9350-fast-export.sh
index b4004e0..2e4e214 100755
--- a/t/t9350-fast-export.sh
+++ b/t/t9350-fast-export.sh
@@ -53,6 +53,33 @@
 
 '
 
+test_expect_success 'fast-export ^muss^{commit} muss' '
+	git fast-export --tag-of-filtered-object=rewrite ^muss^{commit} muss >actual &&
+	cat >expected <<-EOF &&
+	tag muss
+	from $(git rev-parse --verify muss^{commit})
+	$(git cat-file tag muss | grep tagger)
+	data 9
+	valentin
+
+	EOF
+	test_cmp expected actual
+'
+
+test_expect_success 'fast-export --mark-tags ^muss^{commit} muss' '
+	git fast-export --mark-tags --tag-of-filtered-object=rewrite ^muss^{commit} muss >actual &&
+	cat >expected <<-EOF &&
+	tag muss
+	mark :1
+	from $(git rev-parse --verify muss^{commit})
+	$(git cat-file tag muss | grep tagger)
+	data 9
+	valentin
+
+	EOF
+	test_cmp expected actual
+'
+
 test_expect_success 'fast-export master~2..master' '
 
 	git fast-export master~2..master >actual &&
@@ -513,10 +540,41 @@
 '
 
 # NEEDSWORK: not just check return status, but validate the output
+# Note that these tests DO NOTHING other than print a warning that
+# they are omitting the one tag we asked them to export (because the
+# tags resolve to a tree).  They exist just to make sure we do not
+# abort but instead just warn.
 test_expect_success 'tree_tag-obj'    'git fast-export tree_tag-obj'
 test_expect_success 'tag-obj_tag'     'git fast-export tag-obj_tag'
 test_expect_success 'tag-obj_tag-obj' 'git fast-export tag-obj_tag-obj'
 
+test_expect_success 'handling tags of blobs' '
+	git tag -a -m "Tag of a blob" blobtag $(git rev-parse master:file) &&
+	git fast-export blobtag >actual &&
+	cat >expect <<-EOF &&
+	blob
+	mark :1
+	data 9
+	die Luft
+
+	tag blobtag
+	from :1
+	tagger $GIT_COMMITTER_NAME <$GIT_COMMITTER_EMAIL> $GIT_COMMITTER_DATE
+	data 14
+	Tag of a blob
+
+	EOF
+	test_cmp expect actual
+'
+
+test_expect_success 'handling nested tags' '
+	git tag -a -m "This is a nested tag" nested muss &&
+	git fast-export --mark-tags nested >output &&
+	grep "^from $ZERO_OID$" output &&
+	grep "^tag nested$" output >tag_lines &&
+	test_line_count = 2 tag_lines
+'
+
 test_expect_success 'directory becomes symlink'        '
 	git init dirtosymlink &&
 	git init result &&
@@ -567,17 +625,15 @@
 '
 
 test_expect_success 'test bidirectionality' '
-	>marks-cur &&
-	>marks-new &&
 	git init marks-test &&
-	git fast-export --export-marks=marks-cur --import-marks=marks-cur --branches | \
-	git --git-dir=marks-test/.git fast-import --export-marks=marks-new --import-marks=marks-new &&
+	git fast-export --export-marks=marks-cur --import-marks-if-exists=marks-cur --branches | \
+	git --git-dir=marks-test/.git fast-import --export-marks=marks-new --import-marks-if-exists=marks-new &&
 	(cd marks-test &&
 	git reset --hard &&
 	echo Wohlauf > file &&
 	git commit -a -m "back in time") &&
-	git --git-dir=marks-test/.git fast-export --export-marks=marks-new --import-marks=marks-new --branches | \
-	git fast-import --export-marks=marks-cur --import-marks=marks-cur
+	git --git-dir=marks-test/.git fast-export --export-marks=marks-new --import-marks-if-exists=marks-new --branches | \
+	git fast-import --export-marks=marks-cur --import-marks-if-exists=marks-cur
 '
 
 cat > expected << EOF
diff --git a/t/test-lib-functions.sh b/t/test-lib-functions.sh
index 87bf3a2..b299ecc 100644
--- a/t/test-lib-functions.sh
+++ b/t/test-lib-functions.sh
@@ -228,9 +228,11 @@
 # can be a tag pointing to the commit-to-merge.
 
 test_merge () {
+	label="$1" &&
+	shift &&
 	test_tick &&
-	git merge -m "$1" "$2" &&
-	git tag "$1"
+	git merge -m "$label" "$@" &&
+	git tag "$label"
 }
 
 # Efficiently create <nr> commits, each with a unique number (from 1 to <nr>
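With this change test_merge() forwards any extra arguments to "git merge", so a single helper call can now set up an octopus merge and tag it in one step. A hypothetical usage sketch (the branch names mirror the octopus-merge graph fixtures earlier in this patch; the actual call sites are not shown here):

	# label first, then one or more things to merge; the helper runs
	# test_tick, "git merge -m <label> <branches...>", and tags the result as <label>
	test_merge octopus-merge 1 2 3 4
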
diff --git a/t/test-lib.sh b/t/test-lib.sh
index ee602c4..e06fa02 100644
--- a/t/test-lib.sh
+++ b/t/test-lib.sh
@@ -572,6 +572,7 @@
 
 error () {
 	say_color error "error: $*"
+	finalize_junit_xml
 	GIT_EXIT_OK=t
 	exit 1
 }
@@ -700,7 +701,7 @@
 	say_color error "not ok $test_count - $1"
 	shift
 	printf '%s\n' "$*" | sed -e 's/^/#	/'
-	test "$immediate" = "" || { GIT_EXIT_OK=t; exit 1; }
+	test "$immediate" = "" || { finalize_junit_xml; GIT_EXIT_OK=t; exit 1; }
 }
 
 test_known_broken_ok_ () {
@@ -1068,6 +1069,25 @@
 	junit_have_testcase=t
 }
 
+finalize_junit_xml () {
+	if test -n "$write_junit_xml" && test -n "$junit_xml_path"
+	then
+		test -n "$junit_have_testcase" || {
+			junit_start=$(test-tool date getnanos)
+			write_junit_xml_testcase "all tests skipped"
+		}
+
+		# adjust the overall time
+		junit_time=$(test-tool date getnanos $junit_suite_start)
+		sed "s/<testsuite [^>]*/& time=\"$junit_time\"/" \
+			<"$junit_xml_path" >"$junit_xml_path.new"
+		mv "$junit_xml_path.new" "$junit_xml_path"
+
+		write_junit_xml "  </testsuite>" "</testsuites>"
+		write_junit_xml=
+	fi
+}
+
 test_atexit_cleanup=:
 test_atexit_handler () {
 	# In a succeeding test script 'test_atexit_handler' is invoked
@@ -1090,21 +1110,7 @@
 	# removed, so the commands can access pidfiles and socket files.
 	test_atexit_handler
 
-	if test -n "$write_junit_xml" && test -n "$junit_xml_path"
-	then
-		test -n "$junit_have_testcase" || {
-			junit_start=$(test-tool date getnanos)
-			write_junit_xml_testcase "all tests skipped"
-		}
-
-		# adjust the overall time
-		junit_time=$(test-tool date getnanos $junit_suite_start)
-		sed "s/<testsuite [^>]*/& time=\"$junit_time\"/" \
-			<"$junit_xml_path" >"$junit_xml_path.new"
-		mv "$junit_xml_path.new" "$junit_xml_path"
-
-		write_junit_xml "  </testsuite>" "</testsuites>"
-	fi
+	finalize_junit_xml
 
 	if test -z "$HARNESS_ACTIVE"
 	then
diff --git a/trace2/tr2_dst.c b/trace2/tr2_dst.c
index c698575..ae052a0 100644
--- a/trace2/tr2_dst.c
+++ b/trace2/tr2_dst.c
@@ -8,6 +8,19 @@
  */
 #define MAX_AUTO_ATTEMPTS 10
 
+/*
+ * Sentinel file used to detect when we should discard new traces to avoid
+ * writing too many trace files to a directory.
+ */
+#define DISCARD_SENTINEL_NAME "git-trace2-discard"
+
+/*
+ * When set to zero, disables directory file count checks. Otherwise, controls
+ * how many files we can write to a directory before entering discard mode.
+ * This can be overridden via the TR2_SYSENV_MAX_FILES setting.
+ */
+static int tr2env_max_files = 0;
+
 static int tr2_dst_want_warning(void)
 {
 	static int tr2env_dst_debug = -1;
@@ -32,9 +45,75 @@
 	dst->need_close = 0;
 }
 
+/*
+ * Check to make sure we're not overloading the target directory with too many
+ * files. First get the threshold (if present) from the config or envvar. If
+ * it's zero or unset, disable this check. Next check for the presence of a
+ * sentinel file, then check file count.
+ *
+ * Returns 0 if tracing should proceed as normal. Returns 1 if the sentinel file
+ * already exists, which means tracing should be disabled. Returns -1 if there
+ * are too many files but there was no sentinel file, which means we have
+ * created the sentinel file and should write traces to it.
+ *
+ * We expect that some trace processing system is gradually collecting files
+ * from the target directory; after it removes the sentinel file we'll start
+ * writing traces again.
+ */
+static int tr2_dst_too_many_files(struct tr2_dst *dst, const char *tgt_prefix)
+{
+	int file_count = 0, max_files = 0, ret = 0;
+	const char *max_files_var;
+	DIR *dirp;
+	struct strbuf path = STRBUF_INIT, sentinel_path = STRBUF_INIT;
+	struct stat statbuf;
+
+	/* Get the config or envvar and decide if we should continue this check */
+	max_files_var = tr2_sysenv_get(TR2_SYSENV_MAX_FILES);
+	if (max_files_var && *max_files_var && ((max_files = atoi(max_files_var)) >= 0))
+		tr2env_max_files = max_files;
+
+	if (!tr2env_max_files) {
+		ret = 0;
+		goto cleanup;
+	}
+
+	strbuf_addstr(&path, tgt_prefix);
+	if (!is_dir_sep(path.buf[path.len - 1])) {
+		strbuf_addch(&path, '/');
+	}
+
+	/* check sentinel */
+	strbuf_addbuf(&sentinel_path, &path);
+	strbuf_addstr(&sentinel_path, DISCARD_SENTINEL_NAME);
+	if (!stat(sentinel_path.buf, &statbuf)) {
+		ret = 1;
+		goto cleanup;
+	}
+
+	/* check file count */
+	dirp = opendir(path.buf);
+	while (file_count < tr2env_max_files && dirp && readdir(dirp))
+		file_count++;
+	if (dirp)
+		closedir(dirp);
+
+	if (file_count >= tr2env_max_files) {
+		dst->too_many_files = 1;
+		dst->fd = open(sentinel_path.buf, O_WRONLY | O_CREAT | O_EXCL, 0666);
+		ret = -1;
+		goto cleanup;
+	}
+
+cleanup:
+	strbuf_release(&path);
+	strbuf_release(&sentinel_path);
+	return ret;
+}
+
 static int tr2_dst_try_auto_path(struct tr2_dst *dst, const char *tgt_prefix)
 {
-	int fd;
+	int too_many_files;
 	const char *last_slash, *sid = tr2_sid_get();
 	struct strbuf path = STRBUF_INIT;
 	size_t base_path_len;
@@ -50,18 +129,29 @@
 	strbuf_addstr(&path, sid);
 	base_path_len = path.len;
 
-	for (attempt_count = 0; attempt_count < MAX_AUTO_ATTEMPTS; attempt_count++) {
-		if (attempt_count > 0) {
-			strbuf_setlen(&path, base_path_len);
-			strbuf_addf(&path, ".%d", attempt_count);
-		}
+	too_many_files = tr2_dst_too_many_files(dst, tgt_prefix);
+	if (!too_many_files) {
+		for (attempt_count = 0; attempt_count < MAX_AUTO_ATTEMPTS; attempt_count++) {
+			if (attempt_count > 0) {
+				strbuf_setlen(&path, base_path_len);
+				strbuf_addf(&path, ".%d", attempt_count);
+			}
 
-		fd = open(path.buf, O_WRONLY | O_CREAT | O_EXCL, 0666);
-		if (fd != -1)
-			break;
+			dst->fd = open(path.buf, O_WRONLY | O_CREAT | O_EXCL, 0666);
+			if (dst->fd != -1)
+				break;
+		}
+	} else if (too_many_files == 1) {
+		strbuf_release(&path);
+		if (tr2_dst_want_warning())
+			warning("trace2: not opening %s trace file due to too "
+				"many files in target directory %s",
+				tr2_sysenv_display_name(dst->sysenv_var),
+				tgt_prefix);
+		return 0;
 	}
 
-	if (fd == -1) {
+	if (dst->fd == -1) {
 		if (tr2_dst_want_warning())
 			warning("trace2: could not open '%.*s' for '%s' tracing: %s",
 				(int) base_path_len, path.buf,
@@ -75,7 +165,6 @@
 
 	strbuf_release(&path);
 
-	dst->fd = fd;
 	dst->need_close = 1;
 	dst->initialized = 1;
 
diff --git a/trace2/tr2_dst.h b/trace2/tr2_dst.h
index 3adf3ba..b1a8c14 100644
--- a/trace2/tr2_dst.h
+++ b/trace2/tr2_dst.h
@@ -9,6 +9,7 @@
 	int fd;
 	unsigned int initialized : 1;
 	unsigned int need_close : 1;
+	unsigned int too_many_files : 1;
 };
 
 /*
diff --git a/trace2/tr2_sysenv.c b/trace2/tr2_sysenv.c
index 5958cfc..3c3792e 100644
--- a/trace2/tr2_sysenv.c
+++ b/trace2/tr2_sysenv.c
@@ -49,6 +49,9 @@
 				       "trace2.perftarget" },
 	[TR2_SYSENV_PERF_BRIEF]    = { "GIT_TRACE2_PERF_BRIEF",
 				       "trace2.perfbrief" },
+
+	[TR2_SYSENV_MAX_FILES]     = { "GIT_TRACE2_MAX_FILES",
+				       "trace2.maxfiles" },
 };
 /* clang-format on */
 
diff --git a/trace2/tr2_sysenv.h b/trace2/tr2_sysenv.h
index 8dd82a7..d4364a7 100644
--- a/trace2/tr2_sysenv.h
+++ b/trace2/tr2_sysenv.h
@@ -24,6 +24,8 @@
 	TR2_SYSENV_PERF,
 	TR2_SYSENV_PERF_BRIEF,
 
+	TR2_SYSENV_MAX_FILES,
+
 	TR2_SYSENV_MUST_BE_LAST
 };
 
diff --git a/trace2/tr2_tgt_event.c b/trace2/tr2_tgt_event.c
index 9bcac20..6353e8a 100644
--- a/trace2/tr2_tgt_event.c
+++ b/trace2/tr2_tgt_event.c
@@ -10,16 +10,17 @@
 #include "trace2/tr2_tgt.h"
 #include "trace2/tr2_tls.h"
 
-static struct tr2_dst tr2dst_event = { TR2_SYSENV_EVENT, 0, 0, 0 };
+static struct tr2_dst tr2dst_event = { TR2_SYSENV_EVENT, 0, 0, 0, 0 };
 
 /*
- * The version number of the JSON data generated by the EVENT target
- * in this source file.  Update this if you make a significant change
- * to the JSON fields or message structure.  You probably do not need
- * to update this if you just add another call to one of the existing
- * TRACE2 API methods.
+ * The version number of the JSON data generated by the EVENT target in this
+ * source file. The version should be incremented if new event types are added,
+ * if existing fields are removed, or if there are significant changes in
+ * interpretation of existing events or fields. Smaller changes, such as adding
+ * a new field to an existing event, do not require an increment to the EVENT
+ * format version.
  */
-#define TR2_EVENT_VERSION "1"
+#define TR2_EVENT_VERSION "2"
 
 /*
  * Region nesting limit for messages written to the event target.
@@ -107,6 +108,19 @@
 		jw_object_intmax(jw, "repo", repo->trace2_repo_id);
 }
 
+static void fn_too_many_files_fl(const char *file, int line)
+{
+	const char *event_name = "too_many_files";
+	struct json_writer jw = JSON_WRITER_INIT;
+
+	jw_object_begin(&jw, 0);
+	event_fmt_prepare(event_name, file, line, NULL, &jw);
+	jw_end(&jw);
+
+	tr2_dst_write_line(&tr2dst_event, &jw.json);
+	jw_release(&jw);
+}
+
 static void fn_version_fl(const char *file, int line)
 {
 	const char *event_name = "version";
@@ -120,6 +134,9 @@
 
 	tr2_dst_write_line(&tr2dst_event, &jw.json);
 	jw_release(&jw);
+
+	if (tr2dst_event.too_many_files)
+		fn_too_many_files_fl(file, line);
 }
 
 static void fn_start_fl(const char *file, int line,
diff --git a/trace2/tr2_tgt_normal.c b/trace2/tr2_tgt_normal.c
index 438ed05..31b602c 100644
--- a/trace2/tr2_tgt_normal.c
+++ b/trace2/tr2_tgt_normal.c
@@ -9,7 +9,7 @@
 #include "trace2/tr2_tgt.h"
 #include "trace2/tr2_tls.h"
 
-static struct tr2_dst tr2dst_normal = { TR2_SYSENV_NORMAL, 0, 0, 0 };
+static struct tr2_dst tr2dst_normal = { TR2_SYSENV_NORMAL, 0, 0, 0, 0 };
 
 /*
  * Use the TR2_SYSENV_NORMAL_BRIEF setting to omit the "<time> <file>:<line>"
diff --git a/trace2/tr2_tgt_perf.c b/trace2/tr2_tgt_perf.c
index fd979db..ffac802 100644
--- a/trace2/tr2_tgt_perf.c
+++ b/trace2/tr2_tgt_perf.c
@@ -11,7 +11,7 @@
 #include "trace2/tr2_tgt.h"
 #include "trace2/tr2_tls.h"
 
-static struct tr2_dst tr2dst_perf = { TR2_SYSENV_PERF, 0, 0, 0 };
+static struct tr2_dst tr2dst_perf = { TR2_SYSENV_PERF, 0, 0, 0, 0 };
 
 /*
  * Use TR2_SYSENV_PERF_BRIEF to omit the "<time> <file>:<line>"
diff --git a/unpack-trees.c b/unpack-trees.c
index f0f56d4..33ea781 100644
--- a/unpack-trees.c
+++ b/unpack-trees.c
@@ -11,7 +11,6 @@
 #include "refs.h"
 #include "attr.h"
 #include "split-index.h"
-#include "dir.h"
 #include "submodule.h"
 #include "submodule-config.h"
 #include "fsmonitor.h"
diff --git a/wrapper.c b/wrapper.c
index c55d772..e1eaef2 100644
--- a/wrapper.c
+++ b/wrapper.c
@@ -441,7 +441,9 @@
 		"abcdefghijklmnopqrstuvwxyz"
 		"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
 		"0123456789";
-	static const int num_letters = 62;
+	static const int num_letters = ARRAY_SIZE(letters) - 1;
+	static const char x_pattern[] = "XXXXXX";
+	static const int num_x = ARRAY_SIZE(x_pattern) - 1;
 	uint64_t value;
 	struct timeval tv;
 	char *filename_template;
@@ -450,12 +452,12 @@
 
 	len = strlen(pattern);
 
-	if (len < 6 + suffix_len) {
+	if (len < num_x + suffix_len) {
 		errno = EINVAL;
 		return -1;
 	}
 
-	if (strncmp(&pattern[len - 6 - suffix_len], "XXXXXX", 6)) {
+	if (strncmp(&pattern[len - num_x - suffix_len], x_pattern, num_x)) {
 		errno = EINVAL;
 		return -1;
 	}
@@ -466,16 +468,15 @@
 	 */
 	gettimeofday(&tv, NULL);
 	value = ((uint64_t)tv.tv_usec << 16) ^ tv.tv_sec ^ getpid();
-	filename_template = &pattern[len - 6 - suffix_len];
+	filename_template = &pattern[len - num_x - suffix_len];
 	for (count = 0; count < TMP_MAX; ++count) {
 		uint64_t v = value;
+		int i;
 		/* Fill in the random bits. */
-		filename_template[0] = letters[v % num_letters]; v /= num_letters;
-		filename_template[1] = letters[v % num_letters]; v /= num_letters;
-		filename_template[2] = letters[v % num_letters]; v /= num_letters;
-		filename_template[3] = letters[v % num_letters]; v /= num_letters;
-		filename_template[4] = letters[v % num_letters]; v /= num_letters;
-		filename_template[5] = letters[v % num_letters]; v /= num_letters;
+		for (i = 0; i < num_x; i++) {
+			filename_template[i] = letters[v % num_letters];
+			v /= num_letters;
+		}
 
 		fd = open(pattern, O_CREAT | O_EXCL | O_RDWR, mode);
 		if (fd >= 0)
diff --git a/wt-status.c b/wt-status.c
index ad6282c..cc6f945 100644
--- a/wt-status.c
+++ b/wt-status.c
@@ -2029,7 +2029,7 @@
 	char eol = s->null_termination ? '\0' : '\n';
 
 	fprintf(s->fp, "# branch.oid %s%c",
-			(s->is_initial ? "(initial)" : sha1_to_hex(s->sha1_commit)),
+			(s->is_initial ? "(initial)" : oid_to_hex(&s->oid_commit)),
 			eol);
 
 	if (!s->branch)
diff --git a/wt-status.h b/wt-status.h
index 77dad5b..71c3f25 100644
--- a/wt-status.h
+++ b/wt-status.h
@@ -116,7 +116,7 @@
 	int rename_limit;
 	enum wt_status_format status_format;
 	struct wt_status_state state;
-	unsigned char sha1_commit[GIT_MAX_RAWSZ]; /* when not Initial */
+	struct object_id oid_commit; /* when not Initial */
 
 	/* These are computed during processing of the individual sections */
 	int committable;
diff --git a/xdiff/xdiffi.c b/xdiff/xdiffi.c
index 1f1f4a3..bd03513 100644
--- a/xdiff/xdiffi.c
+++ b/xdiff/xdiffi.c
@@ -38,9 +38,9 @@
  * Basically considers a "box" (off1, off2, lim1, lim2) and scan from both
  * the forward diagonal starting from (off1, off2) and the backward diagonal
  * starting from (lim1, lim2). If the K values on the same diagonal crosses
- * returns the furthest point of reach. We might end up having to expensive
- * cases using this algorithm is full, so a little bit of heuristic is needed
- * to cut the search and to return a suboptimal point.
+ * returns the furthest point of reach. We might encounter expensive edge cases
+ * using this algorithm, so a little bit of heuristic is needed to cut the
+ * search and to return a suboptimal point.
  */
 static long xdl_split(unsigned long const *ha1, long off1, long lim1,
 		      unsigned long const *ha2, long off2, long lim2,
@@ -63,11 +63,13 @@
 		int got_snake = 0;
 
 		/*
-		 * We need to extent the diagonal "domain" by one. If the next
+		 * We need to extend the diagonal "domain" by one. If the next
 		 * values exits the box boundaries we need to change it in the
-		 * opposite direction because (max - min) must be a power of two.
+		 * opposite direction because (max - min) must be a power of
+		 * two.
+		 *
 		 * Also we initialize the external K value to -1 so that we can
-		 * avoid extra conditions check inside the core loop.
+		 * avoid extra conditions in the check inside the core loop.
 		 */
 		if (fmin > dmin)
 			kvdf[--fmin - 1] = -1;
@@ -98,11 +100,13 @@
 		}
 
 		/*
-		 * We need to extent the diagonal "domain" by one. If the next
+		 * We need to extend the diagonal "domain" by one. If the next
 		 * values exits the box boundaries we need to change it in the
-		 * opposite direction because (max - min) must be a power of two.
+		 * opposite direction because (max - min) must be a power of
+		 * two.
+		 *
 		 * Also we initialize the external K value to -1 so that we can
-		 * avoid extra conditions check inside the core loop.
+		 * avoid extra conditions in the check inside the core loop.
 		 */
 		if (bmin > dmin)
 			kvdb[--bmin - 1] = XDL_LINE_MAX;
@@ -138,7 +142,7 @@
 		/*
 		 * If the edit cost is above the heuristic trigger and if
 		 * we got a good snake, we sample current diagonals to see
-		 * if some of the, have reached an "interesting" path. Our
+		 * if some of them have reached an "interesting" path. Our
 		 * measure is a function of the distance from the diagonal
 		 * corner (i1 + i2) penalized with the distance from the
 		 * mid diagonal itself. If this value is above the current
@@ -196,8 +200,9 @@
 		}
 
 		/*
-		 * Enough is enough. We spent too much time here and now we collect
-		 * the furthest reaching path using the (i1 + i2) measure.
+		 * Enough is enough. We spent too much time here and now we
+		 * collect the furthest reaching path using the (i1 + i2)
+		 * measure.
 		 */
 		if (ec >= xenv->mxcost) {
 			long fbest, fbest1, bbest, bbest1;
@@ -244,9 +249,9 @@
 
 
 /*
- * Rule: "Divide et Impera". Recursively split the box in sub-boxes by calling
- * the box splitting function. Note that the real job (marking changed lines)
- * is done in the two boundary reaching checks.
+ * Rule: "Divide et Impera" (divide & conquer). Recursively split the box
+ * into sub-boxes by calling the box splitting function. Note that the real
+ * job (marking changed lines) is done in the two boundary reaching checks.
  */
 int xdl_recs_cmp(diffdata_t *dd1, long off1, long lim1,
 		 diffdata_t *dd2, long off2, long lim2,
@@ -323,7 +328,9 @@
 	}
 
 	/*
-	 * Allocate and setup K vectors to be used by the differential algorithm.
+	 * Allocate and setup K vectors to be used by the differential
+	 * algorithm.
+	 *
 	 * One is to store the forward path and one to store the backward path.
 	 */
 	ndiags = xe->xdf1.nreff + xe->xdf2.nreff + 3;
@@ -394,8 +401,8 @@
 /*
  * If a line is indented more than this, get_indent() just returns this value.
  * This avoids having to do absurd amounts of work for data that are not
- * human-readable text, and also ensures that the output of get_indent fits within
- * an int.
+ * human-readable text, and also ensures that the output of get_indent fits
+ * within an int.
  */
 #define MAX_INDENT 200
 
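
MAX_INDENT bounds how far get_indent() counts before giving up: once a line's measured indentation reaches the cap, the scan stops, which limits the work done on machine-generated input and keeps the result within int range. An illustrative, self-contained version of such a capping scan (it works on a plain buffer, so the name and signature differ from the real get_indent()):

#include <stddef.h>

#define MAX_INDENT 200

/*
 * Sketch only: measure a line's indentation, treating a tab as
 * advancing to the next multiple of 8, capping the result at
 * MAX_INDENT, and returning -1 for a line that is entirely blank.
 */
static int measure_indent(const char *line, size_t len)
{
	int indent = 0;
	size_t i;

	for (i = 0; i < len; i++) {
		char c = line[i];

		if (c == ' ')
			indent++;
		else if (c == '\t')
			indent += 8 - indent % 8;
		else if (c != '\n' && c != '\r')
			return indent;	/* first non-blank character */

		if (indent >= MAX_INDENT)
			return MAX_INDENT;
	}
	return -1;	/* nothing but whitespace on this line */
}
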
@@ -429,9 +436,9 @@
 }
 
 /*
- * If more than this number of consecutive blank rows are found, just return this
- * value. This avoids requiring O(N^2) work for pathological cases, and also
- * ensures that the output of score_split fits in an int.
+ * If more than this number of consecutive blank rows are found, just return
+ * this value. This avoids requiring O(N^2) work for pathological cases, and
+ * also ensures that the output of score_split fits in an int.
  */
 #define MAX_BLANKS 20
 
@@ -443,8 +450,8 @@
 	int end_of_file;
 
 	/*
-	 * How much is the line immediately following the split indented (or -1 if
-	 * the line is blank):
+	 * How much is the line immediately following the split indented (or -1
+	 * if the line is blank):
 	 */
 	int indent;
 
@@ -454,8 +461,8 @@
 	int pre_blank;
 
 	/*
-	 * How much is the nearest non-blank line above the split indented (or -1
-	 * if there is no such line)?
+	 * How much is the nearest non-blank line above the split indented (or
+	 * -1 if there is no such line)?
 	 */
 	int pre_indent;
 
@@ -581,13 +588,13 @@
 
 /*
  * Compute a badness score for the hypothetical split whose measurements are
- * stored in m. The weight factors were determined empirically using the tools and
- * corpus described in
+ * stored in m. The weight factors were determined empirically using the tools
+ * and corpus described in
  *
  *     https://github.com/mhagger/diff-slider-tools
  *
- * Also see that project if you want to improve the weights based on, for example,
- * a larger or more diverse corpus.
+ * Also see that project if you want to improve the weights based on, for
+ * example, a larger or more diverse corpus.
  */
 static void score_add_split(const struct split_measurement *m, struct split_score *s)
 {
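
score_add_split() folds the measurements above into a single badness score for one hypothetical split. The indent heuristic (described in a later hunk of this patch) then tries every legal shift of a pure add/delete group, scores the split before and the split after the group, adds the two, and keeps the shift with the lowest total. A schematic sketch of that selection step, with hypothetical names (the real code works on xdfile_t groups and struct split_score values rather than plain ints):

#include <limits.h>

/*
 * Schematic only: evaluate each candidate shift of an add/delete
 * group, combine the badness of its two surrounding splits, and
 * return the shift with the smallest total.
 */
static long pick_best_shift(long first_shift, long last_shift,
			    long group_size,
			    int (*split_badness)(long split_pos))
{
	long shift, best_shift = first_shift;
	int best_score = INT_MAX;

	for (shift = first_shift; shift <= last_shift; shift++) {
		int score = split_badness(shift) +
			    split_badness(shift + group_size);

		if (score < best_score) {
			best_score = score;
			best_shift = shift;
		}
	}
	return best_shift;
}
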
@@ -809,13 +816,16 @@
 	group_init(xdfo, &go);
 
 	while (1) {
-		/* If the group is empty in the to-be-compacted file, skip it: */
+		/*
+		 * If the group is empty in the to-be-compacted file, skip it:
+		 */
 		if (g.end == g.start)
 			goto next;
 
 		/*
 		 * Now shift the change up and then down as far as possible in
-		 * each direction. If it bumps into any other changes, merge them.
+		 * each direction. If it bumps into any other changes, merge
+		 * them.
 		 */
 		do {
 			groupsize = g.end - g.start;
@@ -858,17 +868,17 @@
 		 * If the group can be shifted, then we can possibly use this
 		 * freedom to produce a more intuitive diff.
 		 *
-		 * The group is currently shifted as far down as possible, so the
-		 * heuristics below only have to handle upwards shifts.
+		 * The group is currently shifted as far down as possible, so
+		 * the heuristics below only have to handle upwards shifts.
 		 */
 
 		if (g.end == earliest_end) {
 			/* no shifting was possible */
 		} else if (end_matching_other != -1) {
 			/*
-			 * Move the possibly merged group of changes back to line
-			 * up with the last group of changes from the other file
-			 * that it can align with.
+			 * Move the possibly merged group of changes back to
+			 * line up with the last group of changes from the
+			 * other file that it can align with.
 			 */
 			while (go.end == go.start) {
 				if (group_slide_up(xdf, &g, flags))
@@ -879,14 +889,15 @@
 		} else if (flags & XDF_INDENT_HEURISTIC) {
 			/*
 			 * Indent heuristic: a group of pure add/delete lines
-			 * implies two splits, one between the end of the "before"
-			 * context and the start of the group, and another between
-			 * the end of the group and the beginning of the "after"
-			 * context. Some splits are aesthetically better and some
-			 * are worse. We compute a badness "score" for each split,
-			 * and add the scores for the two splits to define a
-			 * "score" for each position that the group can be shifted
-			 * to. Then we pick the shift with the lowest score.
+			 * implies two splits, one between the end of the
+			 * "before" context and the start of the group, and
+			 * another between the end of the group and the
+			 * beginning of the "after" context. Some splits are
+			 * aesthetically better and some are worse. We compute
+			 * a badness "score" for each split, and add the scores
+			 * for the two splits to define a "score" for each
+			 * position that the group can be shifted to. Then we
+			 * pick the shift with the lowest score.
 			 */
 			long shift, best_shift = -1;
 			struct split_score best_score;