So the function py_calculate_crc8 is called with a bytes object of length 1. This object is parsed via the format “y*” into the Py_buffer “data”. The segfault occurs when data.buf[0] is accessed. Yet all tests pass on Windows and macOS.
start_value (parameter format “I”) must be declared as unsigned int, not uint8_t (which is too small).
first_call (parameter format “p”) must be declared as int, not bool (the size of bool is implementation-defined, but it is typically 1 byte, whereas “p” writes a full int, so bool is probably too small).
I actually tried “b” instead of “I” on the other git branch. I could try changing bool to something else tomorrow. I had assumed that PyArg_ParseTupleAndKeywords automatically casts to the correct size.
I also tried incrementing the ref count of “args”, but that didn’t help either.
You pass in pointers to variables and use the format argument to tell it what types the arguments are and, therefore, how big the pointed-to variables are. If you tell it a variable is “I” (unsigned int) but it’s actually a uint8_t, don’t be surprised if it writes beyond the end of the variable!
Another issue I’ve only just realised: I’m not sure about the PyBuffer_Release(&data); call. A buffer is being passed in, right? But whose responsibility is it to release that buffer, yours or the caller’s? I’d probably assume it was the caller’s responsibility.
However, when a Py_buffer structure gets filled, the underlying buffer is locked so that the caller can subsequently use the buffer even inside a Py_BEGIN_ALLOW_THREADS block without the risk of mutable data being resized or destroyed. As a result, you have to call PyBuffer_Release() after you have finished processing the data (or in any early abort case).