Is this actually innovative? I respect that there’s a lot of work in making it a reality, and in adapting the algorithms specifically for AI training. But doing portions of work in clusters that are far apart and combining the results has been done many times before for non-AI things, right? Or so I would think.
You're right: the MapReduce pattern is very old, and it's well known that applying it to AI training, to enable geographically distributed training runs, would be very beneficial. It hasn't been done yet because model training workloads are much harder to parallelize over high inter-node latency than most traditional workloads.
This paper proposes a work partitioning scheme that removes a constraint which makes parallelizing AI training over high-latency links inefficient. The idea of a work partitioning scheme isn't novel, but the scheme itself is.
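To make the general idea concrete, here's a minimal sketch of one family of such schemes, local-SGD-style infrequent synchronization, where each worker takes H cheap local steps between expensive cross-cluster syncs. This is an illustration of the general shape, not necessarily this paper's exact algorithm, and every name and number in it is made up:

```python
# Sketch (not the paper's algorithm): local-SGD-style data parallelism.
# Each worker takes H local steps between synchronizations instead of
# all-reducing every step. With H = 1 this degenerates to ordinary
# synchronous data parallelism, whose per-step sync is exactly what
# makes high inter-node latency so costly.
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: linear regression, loss = ||Xw - y||^2 / n
d, n_per_worker, K = 8, 256, 4          # dims, samples per worker, workers
true_w = rng.normal(size=d)
shards = []
for _ in range(K):
    X = rng.normal(size=(n_per_worker, d))
    y = X @ true_w + 0.01 * rng.normal(size=n_per_worker)
    shards.append((X, y))

def grad(w, X, y):
    return 2.0 * X.T @ (X @ w - y) / len(y)

def train(H, outer_rounds, lr=0.05):
    """H local steps per worker between synchronizations."""
    w_global = np.zeros(d)
    comm_rounds = 0
    for _ in range(outer_rounds):
        local_models = []
        for X, y in shards:                  # in reality: parallel workers
            w = w_global.copy()
            for _ in range(H):               # H cheap local steps
                w -= lr * grad(w, X, y)
            local_models.append(w)
        w_global = np.mean(local_models, axis=0)  # one expensive WAN sync
        comm_rounds += 1
    return w_global, comm_rounds

# Same total local gradient steps, very different communication cost:
w_sync, c_sync = train(H=1, outer_rounds=200)    # sync every step
w_local, c_local = train(H=20, outer_rounds=10)  # sync every 20 steps
print(f"H=1 : err={np.linalg.norm(w_sync - true_w):.4f}, syncs={c_sync}")
print(f"H=20: err={np.linalg.norm(w_local - true_w):.4f}, syncs={c_local}")
```

The constraint being removed is the per-step synchronization: larger H trades a bit of convergence quality for far fewer synchronization rounds, which is what makes running over slow wide-area links plausible at all.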
Generically speaking, yes, this has been done before. But it can take a lot of work to transform software that works with shared memory or other low-latency interprocess communication mechanisms so that it's practical to run across wide area networks. Sometimes that's not possible at all, which is why certain problems still require "high performance computing" architectures with all of their compute nodes in the same building, connected by high-bandwidth, low-latency links.