So far we've established that we can convert a list of images in a special format into a slice of slices of float64. Recall from earlier that when you stack neurons together, they form a matrix, and that the activation of a neural layer is simply a matrix-vector multiplication. When the inputs are stacked together as well, it becomes a matrix-matrix multiplication.
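To make this concrete, here is a minimal sketch of a single layer's activation as a matrix-vector multiplication over a `[][]float64` weight matrix. The function name `matVecMul` is an illustrative choice, not something defined in the text, and the bias term and nonlinearity are omitted for brevity:

```go
package main

import "fmt"

// matVecMul computes y = W·x, where each row of W holds one
// neuron's weights. This is the core of a layer's activation
// (bias and activation function omitted for brevity).
func matVecMul(W [][]float64, x []float64) []float64 {
	y := make([]float64, len(W))
	for i, row := range W {
		var sum float64
		for j, w := range row {
			sum += w * x[j]
		}
		y[i] = sum
	}
	return y
}

func main() {
	// Two neurons, each with two weights.
	W := [][]float64{
		{1, 2},
		{3, 4},
	}
	x := []float64{1, 1}
	fmt.Println(matVecMul(W, x)) // [3 7]
}
```

Each output element is one neuron's weighted sum over the input, which is exactly the definition of a matrix-vector product.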
We technically can build a neural network with just [][]float64, but the end result would be quite slow. Collectively, as a species, we have had approximately 40 years of developing algorithms for efficient linear algebra operations, such as matrix multiplication and matrix-vector multiplication. This collection of algorithms is generally known as BLAS (Basic Linear Algebra Subprograms).
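The batched case can be sketched the same way: a naive triple-loop matrix-matrix multiplication over `[][]float64`. This works, but it is precisely the kind of routine where a tuned BLAS implementation (for example, via a Go wrapper such as gonum's `mat` package, named here as an assumption rather than something the text prescribes) is dramatically faster thanks to cache blocking and vectorization. The function name `matMul` is illustrative:

```go
package main

import "fmt"

// matMul computes C = A·B for [][]float64 matrices. When inputs are
// stacked as the columns of B, one call activates a layer for the
// whole batch. A BLAS dgemm computes the same thing, but with cache
// blocking and vectorization that this naive triple loop lacks.
func matMul(A, B [][]float64) [][]float64 {
	n, k, m := len(A), len(B), len(B[0])
	C := make([][]float64, n)
	for i := 0; i < n; i++ {
		C[i] = make([]float64, m)
		for p := 0; p < k; p++ {
			a := A[i][p]
			for j := 0; j < m; j++ {
				C[i][j] += a * B[p][j]
			}
		}
	}
	return C
}

func main() {
	A := [][]float64{{1, 2}, {3, 4}}
	B := [][]float64{{5, 6}, {7, 8}}
	fmt.Println(matMul(A, B)) // [[19 22] [43 50]]
}
```

The loop order (i, p, j) keeps the inner loop walking each row contiguously, which is already friendlier to the cache than the textbook (i, j, p) ordering, but still nowhere near what a tuned BLAS achieves.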