Working in heterogeneous computing environments (with first-class facilities for doing so), while still being approachable enough that you can immediately start adapting numerical code from any reference implementation.
Roughly, the set of computational problems that people used (use?) MPI for. Things like numerical solvers for sparse matrices that are so big that you need to split them across your entire cluster. These still require a lot of node-to-node communication, and on top of that, the communication pattern depends on the particular problem (so easy solutions like map-reduce are effectively out). See e.g. https://www.open-mpi.org/, and https://courses.csail.mit.edu/18.337/2005/book/Lecture_08-Do... for the prototypical use case.
My answer would be that Chapel supports a partitioned global namespace, such that a variable within the lexical scope of a given statement can be referenced whether it is local to the memory of the CPU executing that statement, stored on a remote compute node, or stored within a GPU's memory (say). The compiler and runtime implement the communication on the programmer's behalf and take steps to optimize away unnecessary communication. Other key features include first-class support for creating parallel tasks in high-level ways, including parallel loops.
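To make that concrete, here's a minimal sketch of what that looks like in Chapel (my own illustration, not taken from any of the material above; the blockDist factory call follows recent Chapel releases, and older versions spell the distribution as dmapped Block(...) instead):

    use BlockDist;

    config const n = 1000;

    // A block-distributed array: its elements are spread across all of
    // the compute nodes ("locales") the program was launched on.
    const D = blockDist.createDomain({1..n});
    var A: [D] real;

    // A data-parallel loop: each iteration runs on the locale that owns
    // A[i], so most accesses stay local.
    forall i in D do
      A[i] = i * 2.0;

    // The same name A is visible from every locale; the compiler and
    // runtime generate the communication when an element is remote.
    on Locales[numLocales-1] do
      writeln("A[1] as seen from the last locale: ", A[1]);

Launch it on, say, four nodes (./myprog -nl 4) and the array and the forall loop are spread across them without any explicit sends or receives in the source.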