I’m going to write up the current state of PNDArrays, then describe where I think they ought to go / things I’m unsure about.
Currently, a PNDArray is modeled very closely after NumPy's ndarray. Internally it is a struct with five fields: flags, offset, shape, strides, and a data array.
Flags is currently unused in our code, but NumPy uses it to store the properties described here: https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.flags.html. It's not clear to me that these are relevant to us.
Offset is currently unused in our code. In NumPy, it's the number of bytes into the data region where reading of data should start. You could imagine that slicing an ndarray to drop its first few elements, or asking to read only odd-indexed elements, would set this field.
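To make that concrete, here is a minimal sketch of the offset/stride bookkeeping those two operations would do. The helper names (`drop_first`, `odd_elements`) are hypothetical, and it assumes 8-byte (float64) elements; the point is only that both views are expressed as an offset (and possibly a stride change) over the same unmoved data.

```python
ITEMSIZE = 8  # assuming 8-byte (float64) elements

def drop_first(k):
    """View of a flat array with its first k elements removed:
    start reading k * ITEMSIZE bytes into the data region."""
    return {"offset": k * ITEMSIZE, "stride": ITEMSIZE}

def odd_elements():
    """View of only the odd-indexed elements: start at element 1
    and double the stride. No data is copied in either case."""
    return {"offset": 1 * ITEMSIZE, "stride": 2 * ITEMSIZE}

print(drop_first(3))   # {'offset': 24, 'stride': 8}
print(odd_elements())  # {'offset': 8, 'stride': 16}
```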
Shape is definitely needed; it's a tuple giving the extent of each dimension of the PNDArray. The length of the tuple is the number of dimensions (e.g. (3, 4, 5) describes a 3 x 4 x 5 tensor).
Strides encodes how many bytes it takes to move to the next element along each dimension. For example, a standard row-major 4 x 5 matrix of 8-byte elements would have strides (40, 8): it takes 40 bytes (8 * 5) to move to the next row, and 8 bytes to move to the next element within a row.
The data array is exactly what it sounds like: a flat array of all the data in the PNDArray. Its length is the product of the shape tuple. Currently, we lay this out in row-major order.
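The relationship between shape, strides, and the flat data array can be sketched in a few lines. This is a hedged illustration, not our implementation: it assumes 8-byte elements and row-major layout, and the helper names are made up.

```python
ITEMSIZE = 8  # assuming 8-byte (float64) elements

def row_major_strides(shape):
    """Byte strides for a row-major layout: the last dimension is
    contiguous, and each earlier stride is the element size times
    the product of all later extents."""
    strides = []
    acc = ITEMSIZE
    for extent in reversed(shape):
        strides.append(acc)
        acc *= extent
    return tuple(reversed(strides))

def byte_offset(index, strides):
    """Byte position of a multi-dimensional index in the flat data."""
    return sum(i * s for i, s in zip(index, strides))

# The 4 x 5 example from above:
print(row_major_strides((4, 5)))     # (40, 8)
# Element (2, 3) lives 2*40 + 3*8 = 104 bytes in, i.e. flat index 13.
print(byte_offset((2, 3), (40, 8)))  # 104
```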
Current questions / proposed changes:
-
I am thinking that maybe PNDArrays should be flipped to column-major storage. Right now every LAPACK call involves converting from row major to column major and back again, with no clear benefit. The only reason we use row major is that NumPy does. This is the closest I've found to a rationale for NumPy's choice: https://docs.scipy.org/doc/numpy-1.14.0/reference/internals.html . My takeaways are that the pros of row major are that it's the natural layout in Python and C, it's convenient for writing and reading (if you write a matrix out line by line and it's stored row by row, this is very easy), and it works well for images. The major con, though, is that all Fortran code, including LAPACK, uses column major. As such, R, Matlab, and Julia all adopted column-major conventions to interface more easily with BLAS/LAPACK.
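For comparison, a sketch of how the strides would look under the proposed column-major layout (same caveats as before: hypothetical helper, 8-byte elements assumed). In column major the first dimension is contiguous, so the strides are built by scanning the shape forward rather than backward.

```python
ITEMSIZE = 8  # assuming 8-byte (float64) elements

def column_major_strides(shape):
    """Byte strides for a column-major (Fortran) layout: the FIRST
    dimension is contiguous, and each later stride is the element
    size times the product of all earlier extents."""
    strides = []
    acc = ITEMSIZE
    for extent in shape:
        strides.append(acc)
        acc *= extent
    return tuple(strides)

# The same 4 x 5 matrix: moving down a column steps 8 bytes,
# moving to the next column steps 4 * 8 = 32 bytes.
print(column_major_strides((4, 5)))  # (8, 32)
```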
-
It’s not clear to me that we need flags. The immutability of our expressions seems to obviate the need for them.
-
In the world we currently live in, where we are forced to copy matrices between Python and Scala, it's not clear to me that we get value from "offset", since we aren't really taking advantage of having "views" into the data. With that in mind, maybe we should get rid of it. The same argument could potentially be made for strides, though strides can be useful for things like making the transpose operation cheap (you can transpose just by manipulating the strides, no need to actually move any data). Granted, if you use these tricks, you're no longer in column major.
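The transpose trick above is just a reversal of the shape and strides tuples, with the data array untouched. A sketch (hypothetical helper): note that reversing a row-major (4, 5)/(40, 8) array yields (5, 4)/(8, 40), which is exactly a column-major layout, and vice versa, which is why these tricks take you out of whichever convention you started in.

```python
def transpose(shape, strides):
    """Transpose without moving data: reverse the shape and strides.
    Element (i, j) of the transpose is element (j, i) of the
    original, which the reversed strides encode directly."""
    return tuple(reversed(shape)), tuple(reversed(strides))

# Row-major 4 x 5 with strides (40, 8) becomes a 5 x 4 array with
# strides (8, 40), i.e. column-major over the same flat data.
print(transpose((4, 5), (40, 8)))  # ((5, 4), (8, 40))
```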
Happy to hear any thoughts on any of the above.