The array module claims to be efficient, so it should offer wrappers for something similar to malloc. Otherwise, initializing an array is not optimally efficient.
I see arrays as typed lists, not statically-sized. If you want programmer-controlled memory management, use a ctypes array (
ctypes.c_uint * 20000), or a NumPy ndarray.
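For reference, the ctypes option mentioned here can be sketched like this (a minimal sketch; ctypes zero-fills the array's memory when it is created, so no Python list is copied):

```python
import ctypes

# Array type of 20000 C unsigned ints; instantiating it allocates
# and zero-fills the memory in one step, with no Python-list copy.
UIntArray = ctypes.c_uint * 20000
buf = UIntArray()

buf[0] = 42            # element access works like a normal sequence
print(len(buf))        # 20000
print(buf[0], buf[1])  # 42 0
```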
I don’t think it makes sense to directly add
malloc - it returns raw memory and Python wouldn’t know what to do with it.
I do think the array module is missing a way to construct an array that isn’t initialized from a list. Essentially equivalent to
np.empty. That’s seemed like a genuinely useful missing feature in the past.
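Concretely, the array constructor today only takes an iterable or a buffer to copy from; there is no size-only form. A quick illustration (the `size=n` spelling in the comment is hypothetical, not a real parameter):

```python
from array import array

# Existing construction paths: copy from an iterable, or from a buffer.
a = array('I', [0, 1, 2])                        # copies a Python list
b = array('I', bytes(3 * array('I').itemsize))   # copies a zeroed buffer

# There is no array('I', size=n) that merely reserves n items,
# which is what an np.empty-style constructor would provide.
print(len(a), len(b))  # 3 3
```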
array('I', [0]) * 10000
That looks like what I want. I wouldn’t have found that by myself though.
I don’t mean statically sized. I mean that initialization of arrays currently is bad.
For typecodes larger than 1 byte, such as the
'I' you use here, this reads that initial value
0 from memory 10000 times in order to copy it 10000 times. That’s unnecessary overhead, especially if the desired array size is much larger than the 10000 in your example.
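One way to check the overhead claim is a quick micro-benchmark comparing the repeat idiom against building the array from a pre-zeroed bytes buffer (a sketch; absolute timings vary by machine and CPython version):

```python
from array import array
import timeit

N = 1_000_000

def repeat_init():
    # Repeats a one-element array; multi-byte items are copied
    # element by element.
    return array('I', [0]) * N

def buffer_init():
    # bytes(k) yields k zero bytes in one allocation; the array
    # constructor then copies the buffer in a single pass.
    return array('I', bytes(N * array('I').itemsize))

assert repeat_init() == buffer_init()
print("repeat:", timeit.timeit(repeat_init, number=10))
print("buffer:", timeit.timeit(buffer_init, number=10))
```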
Are you commenting on the C implementation of array from code inspection or from benchmarking?
Do you have a PoC that you can benchmark to show the problem?
Code inspection. Here, memset is used only for one-byte data types, whereas larger data types are copied element by element.
In newer versions of CPython, this is refactored, but not changed.
I guess that something like
array('Q', [0]) * N with a sufficiently large
N is slower than
calloc. Moreover, something like
malloc, i.e. just initializing a large array with unknown values (so that filling it iteratively later has no overhead, as opposed to growing the array), is not available at all.
Calloc returns a block of memory that has been zeroed out.
Now if you want 0 in it that’s great.
But if you want int(4173) in a 32-bit int, then it’s a waste of time having calloc zero the memory only for the array code to overwrite it.
I am not seeing how calloc helps.
This is for cases where the later code iteratively fills the array (which is very common). If the array is large at initialization (as I propose, consisting of zeros or of whatever used to be in that part of memory), writing values into the array is fast.
On the other hand, if the array isn’t large at initialization, then the array needs to be grown iteratively as the later code writes values into it iteratively. Dynamically increasing the size of the array can have considerable overhead.
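The two patterns side by side (a sketch; CPython's array does over-allocate on append, so growth is amortized rather than per-element, but preallocation avoids the intermediate reallocations entirely):

```python
from array import array
import timeit

N = 100_000

def grow():
    a = array('Q')
    for i in range(N):
        a.append(i)          # occasionally triggers a reallocation
    return a

def prefill():
    a = array('Q', [0]) * N  # full-size allocation up front
    for i in range(N):
        a[i] = i             # plain writes, no reallocation
    return a

assert grow() == prefill()
print("grow:   ", timeit.timeit(grow, number=5))
print("prefill:", timeit.timeit(prefill, number=5))
```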
You said " Moreover, something like
malloc, i.e. just initializing a large array with unknown values".
This is not what malloc does. It does not write any values into the memory it allocates at all.
Given that malloc can return memory that was previously used and freed it is not safe to assume what is in the memory.
Using calloc will write zeros into the memory that is returned, for the cases where that is needed.
But this code does not need that zeroing, as it makes sure that each byte of the returned memory is initialised.
Calloc would slow down that code, as it doubles the number of writes to memory: one write of 0 and one write of the init data.
Are you asking for a special case for init data that is all 0 and wider than a byte?
Would it check that np->ob_item is all 0 and then use a memset?
I know. We mean the same thing. By “initializing a large array with unknown values”, I mean that a large array variable is created, and the values are not written into it, they are whatever there was in memory. For speed. I mean “initializing” as in
__init__ (create an array variable), not necessarily writing stuff into it.
I am not assuming what is in the memory. Anything can be there.
Yes, if you call
calloc a “special case” of creating an array. It’s one of the most normal ways to create an array in many other libraries.
No. Instead, write a
calloc-like method (for example called
array.zeros, similarly to
numpy.zeros) and a
malloc-like method (for example called
array.malloc) for the Python array module.
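To be clear, array.zeros and array.malloc are names proposed in this thread, not existing APIs. Only the zeros half can be approximated in pure Python today; a sketch:

```python
from array import array

def array_zeros(typecode: str, n: int) -> array:
    """Hypothetical array.zeros(typecode, n): n zero items, no list.

    bytes(k) allocates k zero bytes in one step (calloc-like), and
    the array constructor copies that buffer directly.
    """
    return array(typecode, bytes(n * array(typecode).itemsize))

# A malloc-like constructor (uninitialized contents) has no pure-Python
# equivalent; every existing construction path writes defined values.

z = array_zeros('Q', 1000)
print(len(z), z[0], z[-1])  # 1000 0 0
```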
-1 on having a method that exposes uninitialised memory. I’ve no particular opinion on the calloc method, other than to note that if you’re sufficiently concerned about performance that
array('Q', [0]) * N is too slow, you probably want something better than the array module, such as numpy, in any case.
If array('Q', [0]) * N is much slower than it could be and it is a bottleneck in your program, create a PR with an optimization of this case. I do not promise that it will be accepted; it depends on the benefit/complexity ratio, but it is worth a try. If it does not help, try to use NumPy. If that does not help, then perhaps Python is the wrong tool for solving your problems.