ISO C and C++ allow implementations to use a non-zero bit-pattern as the object representation of a null pointer, even though they require that a literal 0 or (void*)0 in the source (in a pointer context) evaluates to a null pointer, equivalent to NULL. So reasoning from source-level definitions like #define NULL 0 is not sufficient in C or C++.
But fortunately for everyone's sanity, all modern C and C++ implementations for x86 (and other modern ISAs) do use 0 in asm as the bit-pattern for NULL. This makes non-portable code like memset(ptr_array, 0, size) work as expected, equivalent to a loop that sets each element to NULL.
When was the NULL macro not 0? asks about source-level non-zero definitions, but I think that's not allowed in modern C. The answers do mention several historical machines that used non-zero null pointer bit-patterns (i.e. what you'd see in the asm for code like do {...} while(p = p->next);).
Remember that in asm, pointers are just 64-bit (or 32-bit) integers. The whole idea of NULL is in-band signalling, not some special thing that isn't even a pointer-sized integer. So we have to pick some constant.
0 is a convenient sentinel value because many ISAs can branch slightly more efficiently on a value being zero/non-zero than on a comparison against any other value. e.g. ARM has cbnz to branch on non-zero without needing a separate cmp. x86 has a minor code-size optimization of test eax, eax / jnz instead of cmp eax, 0 / jnz. (Test whether a register is zero with CMP reg,0 vs OR reg,reg?). If FLAGS were already set by another arithmetic instruction, no test would be needed, but that's unusual for null pointer tests: you don't usually do math on a pointer and then check it for NULL.
(You're not seeing that optimization in your asm because your debug build stores to memory before testing.)
Also, 0 is easy to generate. Some large number might take a larger instruction, or more instructions, to create in a register. (e.g. x86 xor eax,eax instead of mov eax, imm32). And zero-initialized static storage like static int *table = NULL; can be in the BSS instead of .data - modern systems zero-init the BSS.
On some systems (especially embedded) the 0 address isn't special, and you actually have system-management stuff there, like the start of a table of interrupt handlers. So 0 can be a valid address, as well as being equal to NULL. This kinda sucks, so this is where one might actually want a non-zero object representation for null pointers. @Simon Richter comments about hacking an ARM compiler to use 0x20000000 as the NULL bit-pattern.
On systems using virtual memory (like Windows), we can simply avoid ever mapping the page containing that address, which helps detect bugs by making sure NULL-dereference actually faults. (Remember that undefined behaviour in C and C++ is not required to fault, but it's certainly convenient if it does.)