The automatic detection of heap-induced data dependencies is a major challenge for parallelizing compilers. Current optimizing compilers lack the context needed to expose parallelism in scientific codes that use dynamic data structures, i.e., structures allocated at runtime and stored in the heap. Traditionally, it has been believed that few static assumptions can be made about runtime structures, and that those that can are rarely precise enough to support aggressive optimization. In this paper, however, we show that a precise shape analysis technique, which accurately captures the shape of heap data structures at compile time, can provide sufficient information to identify independent heap accesses in challenging benchmarks. As a result, hard-to-find parallelism that current parallelizing compilers miss is exposed and exploited by our technique.