
I am writing a max-pooling layer and I want to do backpropagation. I wrote it as below:

def _backward(self, grad):
    grad_X = np.zeros(self.X_shape)
    batch_sz, in_c, _, _ = self.X_shape
    s_h, s_w = self.stride
    k_h, k_w = self.kernel_shape
    h, w = self.X_shape[2:4]
    out_h = (h - k_h) // s_h + 1
    out_w = (w - k_w) // s_w + 1
    for r in range(out_h):
        r_start = r * s_h
        for c in range(out_w):
            c_start = c * s_w
            pool = grad_X[:, :, r_start: r_start+k_h, c_start: c_start+k_w]
            pool = pool.reshape((batch_sz, in_c, -1))
            #pool[np.arange(batch_sz), np.arange(in_c), self.argmax[:, :, r, c]] = grad[:, :, r, c]
            for b in range(batch_sz):
                for ch in range(in_c):
                    #pool[b, ch, self.argmax[b, ch, r, c]] = grad[b, ch, r, c]
                    pool[b, ch, self.argmax[b, ch, r, c]] = 1
                    print(pool)
                    #print(grad_X)
    self.grad_X = grad_X
    print(grad_X)
    return self.grad_X

and I find that although my assignments change pool, they never show up in grad_X, and the returned self.grad_X is always all zeros. Can anyone tell me why?
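For context, the backward pass above presupposes a forward pass that stores, for every (batch, channel, output row, output column), the flat index of the maximum inside its k_h × k_w window. A minimal sketch of such a forward pass (the method name and attribute layout are assumptions matching the question's code, not code from the question):

import numpy as np

def _forward(self, X):
    self.X_shape = X.shape
    batch_sz, in_c, h, w = X.shape
    s_h, s_w = self.stride
    k_h, k_w = self.kernel_shape
    out_h = (h - k_h) // s_h + 1
    out_w = (w - k_w) // s_w + 1
    out = np.empty((batch_sz, in_c, out_h, out_w))
    self.argmax = np.empty((batch_sz, in_c, out_h, out_w), dtype=int)
    for r in range(out_h):
        for c in range(out_w):
            # Reading through a reshaped copy is harmless; only writing is not.
            window = X[:, :, r*s_h : r*s_h+k_h, c*s_w : c*s_w+k_w]
            flat = window.reshape((batch_sz, in_c, -1))
            self.argmax[:, :, r, c] = flat.argmax(axis=2)  # flat index within each window
            out[:, :, r, c] = flat.max(axis=2)
    return out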

– dubugger
  • That `pool.reshape` is making a copy. I illustrate this in https://stackoverflow.com/questions/71302136/get-a-flat-view-of-a-numpy-array-or-an-exception#71302871 – hpaulj Mar 04 '22 at 07:49
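The comment can be verified directly: a strided slice of a 4-D array is still a view, but reshaping it cannot be expressed with uniform strides, so NumPy silently returns a copy, and assignments into that copy never reach the original. A small demonstration (shapes chosen arbitrarily):

import numpy as np

grad_X = np.zeros((2, 3, 4, 4))
pool = grad_X[:, :, 0:2, 0:2]        # a strided slice: still a view of grad_X
flat = pool.reshape((2, 3, -1))      # cannot be a uniform-stride view -> a copy

print(np.shares_memory(grad_X, pool))  # True:  the slice shares memory
print(np.shares_memory(grad_X, flat))  # False: the reshape made a copy

flat[0, 0, 0] = 1                      # the write lands in the copy...
print(grad_X[0, 0, 0, 0])              # ...so this still prints 0.0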

0 Answers
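No answer was posted, but a fix that follows from the comment is to skip the reshaped copy and index grad_X directly, converting the stored flat index back into a (row, col) offset with np.unravel_index. A sketch under the same assumptions about self.argmax as in the question (it also accumulates grad, where the question's debug code writes 1; numpy is assumed imported as np):

def _backward(self, grad):
    grad_X = np.zeros(self.X_shape)
    batch_sz, in_c, h, w = self.X_shape
    s_h, s_w = self.stride
    k_h, k_w = self.kernel_shape
    out_h = (h - k_h) // s_h + 1
    out_w = (w - k_w) // s_w + 1
    for r in range(out_h):
        for c in range(out_w):
            for b in range(batch_sz):
                for ch in range(in_c):
                    # Turn the flat within-window index into row/col offsets
                    # and write straight into grad_X: no reshape, no copy.
                    dr, dc = np.unravel_index(self.argmax[b, ch, r, c], (k_h, k_w))
                    grad_X[b, ch, r * s_h + dr, c * s_w + dc] += grad[b, ch, r, c]
    self.grad_X = grad_X
    return self.grad_X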