odb: guard against data loss checking out a huge file

This introduces an additional guard for platforms where `unsigned long`
and `size_t` do not have the same size. If the size of an object in the
database would overflow `unsigned long`, we now exit with an error
instead of silently truncating the size and losing data.
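
The guard amounts to a checked narrowing cast. As a rough sketch of the
idea (the real `cast_size_t_to_ulong()` helper used in the patch below
is defined elsewhere in this series; the error message here is purely
illustrative):

    static inline unsigned long cast_size_t_to_ulong(size_t a)
    {
    	/*
    	 * On LLP64 platforms (e.g. 64-bit Windows), `unsigned long`
    	 * is 32 bits while `size_t` is 64 bits, so a huge object
    	 * size would otherwise be silently truncated; die instead.
    	 */
    	if (a != (unsigned long)a)
    		die("object too large to read on this platform");
    	return (unsigned long)a;
    }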

A complete fix will have to update _many_ other functions throughout the
codebase to use `size_t` instead of `unsigned long`; that will have to
be implemented at a later stage.

This commit puts in a stop-gap for the time being.

Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Matt Cooper <vtbassmatt@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
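
For context on the patch below: `st_mult()` and `st_add()` are Git's
overflow-checked `size_t` arithmetic helpers. A simplified sketch of
what they guard against (hypothetical function names, assuming the
usual `die()` and `SIZE_MAX` from the compat headers; not the real
macros):

    /* die() rather than silently wrap on size_t overflow */
    static inline size_t checked_size_mult(size_t a, size_t b)
    {
    	if (b && a > SIZE_MAX / b)
    		die("size_t overflow");
    	return a * b;
    }

    static inline size_t checked_size_add(size_t a, size_t b)
    {
    	if (b > SIZE_MAX - a)
    		die("size_t overflow");
    	return a + b;
    }

With checks like these, `size = st_add(st_mult(size, 10), c)` can parse
the decimal size digits of a loose object header without ever wrapping
around silently.
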
diff --git a/object-file.c b/object-file.c
index f233b44..70e456f 100644
--- a/object-file.c
+++ b/object-file.c
@@ -1344,7 +1344,7 @@ static int parse_loose_header_extended(const char *hdr, struct object_info *oi,
 				       unsigned int flags)
 {
 	const char *type_buf = hdr;
-	unsigned long size;
+	size_t size;
 	int type, type_len = 0;
 
 	/*
@@ -1388,12 +1388,12 @@ static int parse_loose_header_extended(const char *hdr, struct object_info *oi,
 		if (c > 9)
 			break;
 		hdr++;
-		size = size * 10 + c;
+		size = st_add(st_mult(size, 10), c);
 		}
 	}
 
 	if (oi->sizep)
-		*oi->sizep = size;
+		*oi->sizep = cast_size_t_to_ulong(size);
 
 	/*
 	 * The length must be followed by a zero byte