[PATCH] atomic: inc_not_zero
Introduce an atomic_inc_not_zero operation. Make this a special case of
atomic_add_unless, because the lockless pagecache actually wants
atomic_inc_not_negativeone due to its offset refcount.
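
For reference, a rough sketch of the idea: atomic_add_unless() can be
built from atomic_cmpxchg(), and atomic_inc_not_zero() then falls out as
the (v, 1, 0) special case.  This is illustrative only; the
per-architecture implementations added by this patch may use ll/sc or
other primitives instead of a cmpxchg loop.

static inline int atomic_add_unless(atomic_t *v, int a, int u)
{
	int c, old;

	c = atomic_read(v);
	for (;;) {
		if (c == u)		/* hit the forbidden value: do nothing */
			break;
		old = atomic_cmpxchg(v, c, c + a);
		if (old == c)		/* cmpxchg succeeded: add done */
			break;
		c = old;		/* lost a race: retry with the new value */
	}
	return c != u;			/* non-zero iff the add was performed */
}

#define atomic_inc_not_zero(v)	atomic_add_unless((v), 1, 0)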
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
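
As an illustration of the intended use (the struct and function names
below are made up for this example, not taken from the patch), a lockless
lookup can take a reference only while the object's refcount is still
non-zero, backing off if the object is already being freed:

struct obj {
	atomic_t refcount;
	/* ... */
};

/* Returns non-zero if a reference was taken, zero if the object is dying. */
static inline int obj_get_unless_zero(struct obj *o)
{
	return atomic_inc_not_zero(&o->refcount);
}

A zero return means the caller must treat the lookup as a miss and must
not touch the object.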
diff --git a/Documentation/atomic_ops.txt b/Documentation/atomic_ops.txt
index f174416..23a1c24 100644
--- a/Documentation/atomic_ops.txt
+++ b/Documentation/atomic_ops.txt
@@ -115,7 +115,7 @@
is negative. It requires explicit memory barrier semantics around the
operation.
-Finally:
+Then:
int atomic_cmpxchg(atomic_t *v, int old, int new);
@@ -129,6 +129,18 @@
The semantics for atomic_cmpxchg are the same as those defined for 'cas'
below.
+Finally:
+
+ int atomic_add_unless(atomic_t *v, int a, int u);
+
+If the atomic value v is not equal to u, this function adds a to v and
+returns non-zero; if v is equal to u, it returns zero.  The test and the
+add are performed as a single atomic operation.
+
+atomic_add_unless requires explicit memory barriers around the operation.
+
+atomic_inc_not_zero(v) is equivalent to atomic_add_unless(v, 1, 0).
+
If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces are