numpy - How to np.roll() faster?
I'm using np.roll() to do nearest-neighbor-like averaging, but I have a feeling there are faster ways. Here is a simplified example, but imagine 3 dimensions and more complex averaging "stencils". Just as an example, see section 6 of this paper.
Here are a few lines of the simplified example:
    for j in range(nper):
        phi2 = 0.25*(np.roll(phi,  1, axis=0) +
                     np.roll(phi, -1, axis=0) +
                     np.roll(phi,  1, axis=1) +
                     np.roll(phi, -1, axis=1) )
        phi[do_me] = phi2[do_me]
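(For context, phi, nper and do_me are defined elsewhere in the full code. A minimal self-contained setup that makes the loop above runnable could look like the following; the array shape, iteration count and mask are placeholder assumptions, not my real values.)

    import numpy as np

    nper = 100                               # placeholder iteration count
    phi = np.random.rand(300, 300)           # placeholder field to be averaged
    do_me = np.ones_like(phi, dtype=bool)    # update everything except the boundary
    do_me[0, :] = do_me[-1, :] = do_me[:, 0] = do_me[:, -1] = False

    for j in range(nper):
        phi2 = 0.25*(np.roll(phi,  1, axis=0) +
                     np.roll(phi, -1, axis=0) +
                     np.roll(phi,  1, axis=1) +
                     np.roll(phi, -1, axis=1) )
        phi[do_me] = phi2[do_me]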
So should I be looking for something that returns views instead of arrays (since it seems that roll returns arrays)? In that case, is roll initializing a new array each time it's called? I noticed the overhead is huge for small arrays.
In fact, it's most efficient for arrays of [100,100] to [300,300] in size on my laptop; above that there are possibly caching issues.
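To illustrate what I mean about roll returning new arrays rather than views, here is a small check (not part of the original code):

    import numpy as np

    a = np.arange(16).reshape(4, 4)
    b = np.roll(a, 1, axis=0)

    # np.roll returns a freshly allocated array; it does not share memory
    # with its input, so every call pays the full allocation + copy cost.
    print(np.shares_memory(a, b))  # False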
Would scipy.ndimage.interpolation.shift()
perform better, the way it is implemented here, and if so, would that be a fix? In the linked example above I'm throwing away the wrapped parts anyway, but that might not always be the case.
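For concreteness, here is a rough timing sketch one could use to compare the two (the array size, shift amounts and repetition count are arbitrary placeholders; in recent SciPy the same function is also exposed as scipy.ndimage.shift):

    import numpy as np
    import timeit
    from scipy.ndimage import shift  # scipy.ndimage.interpolation.shift in older SciPy

    phi = np.random.rand(300, 300)

    # np.roll: a plain copy with wrap-around, no interpolation machinery
    t_roll = timeit.timeit(lambda: np.roll(phi, 1, axis=0), number=1000)

    # ndimage shift: order=0 and mode='wrap' make it roughly comparable to roll,
    # but it still goes through the general interpolation code path
    t_shift = timeit.timeit(lambda: shift(phi, (1, 0), order=0, mode='wrap'),
                            number=1000)

    print(f"np.roll:       {t_roll:.3f} s per 1000 calls")
    print(f"ndimage shift: {t_shift:.3f} s per 1000 calls")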
note: in this question I'm only looking for what is available within numpy/scipy. Of course there are many ways to speed up python and numpy, but that's not what I'm looking for here, because I'm really trying to understand numpy better. Thanks!