Sometimes yes, sometimes no. It isn't just a matter of processor architecture: the compiler matters, the compiler version matters, the optimization options matter, the system libraries matter...
If the code (including any potentially-buggy dependency) is written in a memory-safe language (i.e. most programming languages in widespread use other than C and C++), its behavior is largely predictable and platform-independent. But if the code is not written in a memory-safe language, then whether a bug in the source code manifests as a bug at runtime depends on the exact details of how the code is compiled and executed.
For example, suppose that the code copies a null-terminated string but omits the null terminator. Whether this turns out to be a problem depends on what is immediately after the copy in memory. It may happen that in a certain build, the bytes following the buffer were already zero, and therefore there is no visible misbehavior and no vulnerability. But in a different build, or with a different runtime environment, you won't have this luck.
Tools like AddressSanitizer (ASan) and Valgrind instrument the program to make memory errors far more likely to be detected, at the expense of performance and memory usage. They use techniques such as reserving space between buffers so that a buffer overflow lands in the reserved space rather than in some other variable, and keeping copies of the expected content of some memory and checking them to detect if it's modified when it shouldn't be. First run those tools on sample inputs to weed out the most obvious problems. Then run those tools on inputs produced by your fuzzer.
x86 is usually a better platform for fuzzing because more tooling is available for it and because fuzzing is very demanding in CPU time, so cheap, plentiful x86 cores help. But if you have an Arm cluster, you can use an Arm cluster. When you combine the diversity of inputs from a fuzzer with the memory layout checks of a memory error detection tool, few memory errors will go undetected no matter what the platform is.