Which indicators would you look for in the Spark UI’s Storage tab to signal that a cached table is not performing optimally? Assume you are using Spark’s MEMORY_ONLY storage level.
- A. Size on Disk is < Size in Memory
- B. The RDD Block Name includes the "*" annotation signaling a failure to cache
- C. Size on Disk is > 0
- D. The number of Cached Partitions > the number of Spark Partitions
- E. On Heap Memory Usage is within 75% of Off Heap Memory Usage
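For context, here is a minimal PySpark sketch of the setup the question describes: caching a DataFrame with the MEMORY_ONLY storage level and materializing it so its blocks appear in the Storage tab. The app name and DataFrame are illustrative, not part of the question.

```python
from pyspark.sql import SparkSession
from pyspark import StorageLevel

spark = SparkSession.builder.appName("storage-tab-demo").getOrCreate()

# Build a small example DataFrame and cache it with MEMORY_ONLY,
# the storage level the question assumes.
df = spark.range(0, 1_000_000)
df.persist(StorageLevel.MEMORY_ONLY)

# Caching is lazy: run an action so the blocks are actually materialized
# and show up in the Spark UI's Storage tab (http://localhost:4040 by default).
df.count()
```

One detail worth keeping in mind when weighing the options: with MEMORY_ONLY, partitions that do not fit in memory are dropped and recomputed on demand rather than spilled to disk, which is exactly the behavior several of these indicators probe.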