Hi Mark,
On Fri, Jan 20, 2023 at 03:18:48PM +0000, Mark Rutland wrote:
On Fri, Jan 20, 2023 at 03:09:42PM +0100, Yann Sionneau wrote:
> Add common headers (atomic, bitops, barrier and locking) for basic
> kvx support.
>
> Co-developed-by: Clement Leger <clement(a)clement-leger.fr>
> Signed-off-by: Clement Leger <clement(a)clement-leger.fr>
> Co-developed-by: Jules Maselbas <jmaselbas(a)kalray.eu>
> Signed-off-by: Jules Maselbas <jmaselbas(a)kalray.eu>
> Co-developed-by: Julian Vetter <jvetter(a)kalray.eu>
> Signed-off-by: Julian Vetter <jvetter(a)kalray.eu>
> Co-developed-by: Julien Villette <jvillette(a)kalray.eu>
> Signed-off-by: Julien Villette <jvillette(a)kalray.eu>
> Co-developed-by: Yann Sionneau <ysionneau(a)kalray.eu>
> Signed-off-by: Yann Sionneau <ysionneau(a)kalray.eu>
> ---
>
> Notes:
> V1 -> V2:
> - use {READ,WRITE}_ONCE for arch_atomic64_{read,set}
> - use asm-generic/bitops/atomic.h instead of __test_and_*_bit
> - removed duplicated includes
> - rewrite xchg and cmpxchg in C using builtins for acswap insn
Thanks for those changes. I see one issue below (instantiated a few times), but
other than that this looks good to me.
[...]
> +#define ATOMIC64_RETURN_OP(op, c_op) \
> +static inline long arch_atomic64_##op##_return(long i, atomic64_t *v) \
> +{ \
> + long new, old, ret; \
> + \
> + do { \
> + old = v->counter; \
This should be arch_atomic64_read(v), in order to avoid the potential for the
compiler to replay the access and introduce ABA races and other such problems.
Thanks for the suggestion, this will be in v3.
I took a look at the generic 32-bit atomics, but I am unsure whether this
needs to be done for both the SMP and non-SMP implementations. I can
send a first patch and we can discuss from there.
> + new = old c_op i; \
> + ret = arch_cmpxchg(&v->counter, old, new); \
> + } while (ret != old); \
> + \
> + return new; \
> +}
> +
> +#define ATOMIC64_OP(op, c_op) \
> +static inline void arch_atomic64_##op(long i, atomic64_t *v) \
> +{ \
> + long new, old, ret; \
> + \
> + do { \
> + old = v->counter; \
Likewise, arch_atomic64_read(v) here.
ack
> + new = old c_op i; \
> + ret = arch_cmpxchg(&v->counter, old, new); \
> + } while (ret != old); \
> +}
> +
> +#define ATOMIC64_FETCH_OP(op, c_op) \
> +static inline long arch_atomic64_fetch_##op(long i, atomic64_t *v) \
> +{ \
> + long new, old, ret; \
> + \
> + do { \
> + old = v->counter; \
Likewise, arch_atomic64_read(v) here.
ack
> + new = old c_op i; \
> + ret = arch_cmpxchg(&v->counter, old, new); \
> + } while (ret != old); \
> + \
> + return old; \
> +}
> +
> +#define ATOMIC64_OPS(op, c_op) \
> + ATOMIC64_OP(op, c_op) \
> + ATOMIC64_RETURN_OP(op, c_op) \
> + ATOMIC64_FETCH_OP(op, c_op)
> +
> +ATOMIC64_OPS(and, &)
> +ATOMIC64_OPS(or, |)
> +ATOMIC64_OPS(xor, ^)
> +ATOMIC64_OPS(add, +)
> +ATOMIC64_OPS(sub, -)
> +
> +#undef ATOMIC64_OPS
> +#undef ATOMIC64_FETCH_OP
> +#undef ATOMIC64_OP
> +
> +static inline int arch_atomic_add_return(int i, atomic_t *v)
> +{
> + int new, old, ret;
> +
> + do {
> + old = v->counter;
Likewise, arch_atomic64_read(v) here.
Ack; this will be arch_atomic_read(v) here, since this is atomic_t
rather than atomic64_t.
Thanks
-- Jules