This directory provides examples of Coccinelle (http://coccinelle.lip6.fr/)
semantic patches that might be useful to developers.
There are two types of semantic patches:
* Using semantic transformations to check for bad patterns in the code;
The target 'make coccicheck' is designed to check for these patterns and
it is expected that any resulting patch indicates a regression.
The patches resulting from 'make coccicheck' are small and infrequent,
so once they are found, they can be sent to the mailing list as per usual.
Examples of introducing new patterns:
67947c34ae (convert "hashcmp() != 0" to "!hasheq()", 2018-08-28)
b84c783882 (fsck: s/++i > 1/i++/, 2018-10-24)
Examples of fixes using this approach:
248f66ed8e (run-command: use strbuf_addstr() for adding a string to
a strbuf, 2018-03-25)
f919ffebed (Use MOVE_ARRAY, 2018-01-22)
These types of semantic patches are usually part of testing, cf.
0860a7641b (travis-ci: fail if Coccinelle static analysis found something
to transform, 2018-07-23)
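To illustrate, such a check is typically a small transformation like
the following sketch, modeled on the hasheq() conversion mentioned
above (the rules actually shipped in this directory may differ in
detail):

    // Sketch: rewrite "hashcmp() != 0" to "!hasheq()", cf. 67947c34ae.
    @@
    expression E1, E2;
    @@
    - hashcmp(E1, E2) != 0
    + !hasheq(E1, E2)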
* Using semantic transformations in large scale refactorings throughout
the code base.
When a semantic patch is applied to produce a real patch and that patch
is sent to the mailing list in the usual way, it can be expected to
cause a lot of textual and semantic conflicts, as such large scale
refactorings change function signatures that are used widely in the
code base.
A textual conflict arises if the code surrounding any call to such a
function changes; a semantic conflict arises when other patch series
in flight introduce new calls to such functions.
Semantic patches can be used to aid these large scale refactorings.
However, we do not want to store them in the same place as the checks
for bad patterns, as automated builds would then fail.
That is why semantic patches in 'contrib/coccinelle/*.pending.cocci'
are ignored by the checks and can be applied using
'make coccicheck-pending'.
This makes it possible to share plans for pending large scale
refactorings without impacting the bad pattern checks.
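For example, a pending large scale refactoring could be expressed as
a rule along the following lines and stored in a '*.pending.cocci'
file; 'old_api()' and 'new_api()' are hypothetical names used only
for illustration:

    // Hypothetical migration: callers of old_api() move to new_api(),
    // which takes an extra flags argument.
    @@
    expression E;
    @@
    - old_api(E)
    + new_api(E, 0)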
Git-specific tips & things to know about how we run "spatch":
* The "make coccicheck" will piggy-back on
"COMPUTE_HEADER_DEPENDENCIES". If you've built a given object file
the "coccicheck" target will consider its depednency to decide if
it needs to re-run on the corresponding source file.
This means that a "make coccicheck" will re-compile object files
before running. This might be unexpected, but speeds up the run in
the common case, as e.g. a change to "column.h" won't require all
coccinelle rules to be re-run against "grep.c" (or another file
that happens not to use "column.h").
To disable this behavior use the "SPATCH_USE_O_DEPENDENCIES=NoThanks"
flag.
* To speed up our rules the "make coccicheck" target will by default
concatenate all of the *.cocci files here into an "ALL.cocci", and
apply it to each source file.
This makes the run faster, as "spatch" is invoked once per source
file instead of once per rule file per source file. See the Makefile
for further discussion; this behavior can be disabled with
"SPATCH_CONCAT_COCCI=".
But since they're concatenated, any "<id>" in the "<rulename>" (e.g.
"@ my_name", as opposed to anonymous "@@") needs to be unique across
all our *.cocci files. You should only need to name rules if other
rules depend on them (currently only one rule is named), as sketched
below.
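A minimal sketch of a named rule and an anonymous rule depending on
it; "foo()", "bar()", "baz()" and "quux()" are hypothetical functions
used only for illustration:

    // A named rule ...
    @ my_name @
    expression E;
    @@
    - foo(E)
    + bar(E)

    // ... and an anonymous rule that only applies if "my_name" matched.
    @ depends on my_name @
    expression E;
    @@
    - baz(E)
    + quux(E)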
* To speed up incremental runs even more, use the "spatchcache" tool
in this directory as your "SPATCH". It aims to be a "ccache" for
coccinelle, and piggy-backs on "COMPUTE_HEADER_DEPENDENCIES".
It caches in Redis by default; see its source for a how-to.
In one setup, with a primed cache a "make coccicheck" followed by a
"make clean && make" takes around 10s to run, but 2m30s without
"spatchcache" (with the default of "SPATCH_CONCAT_COCCI=Y").
With "SPATCH_CONCAT_COCCI=" the total runtime is around ~6m, sped
up to ~1m with "spatchcache".
Most of the 10s (or ~1m) is spent re-running "spatch" on files we
couldn't cache, as we didn't compile them (mostly in contrib/* and
compat/*).
The absolute times will differ for you, but the relative speedup
from caching should be on that order.