Wednesday, February 1, 2023

Transformations in PySpark: the two types of transformations and an example of the flatMap method

What Are Transformations?

In Spark, the core data structures are immutable, meaning they cannot be changed once created. This might seem like a strange concept at first: if you cannot change it, how are you supposed to use it? To “change” a DataFrame, you have to instruct Spark how you would like to modify the DataFrame you have into the one that you want. These instructions are called transformations. Transformations are the core of how you will express your business logic using Spark. There are two types of transformations: those that specify narrow dependencies and those that specify wide dependencies.
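
To make this concrete, here is a minimal sketch of how transformations are only recorded, not executed, until an action runs. It assumes a local SparkSession and a hypothetical people.csv file with an age column:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("transformations-demo").getOrCreate()
df = spark.read.csv("people.csv", header=True, inferSchema=True)  # hypothetical input file

# Each line below only describes a change to the DataFrame; nothing runs yet.
adults = df.filter(F.col("age") >= 18)                # transformation
renamed = adults.withColumnRenamed("age", "years")    # transformation

# Only an action such as count() or show() triggers the actual work.
print(renamed.count())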

What Are Narrow Dependencies?

Transformations consisting of narrow dependencies [we’ll call them narrow transformations] are those where each input partition will contribute to only one output partition.
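
For example, filter() on an RDD is narrow. Here is a small sketch, reusing the spark session from the example above, that shows each input partition mapping to exactly one output partition:

# filter() is narrow: each of the 4 input partitions produces exactly one
# output partition, so no rows move between partitions.
rdd = spark.sparkContext.parallelize(range(8), numSlices=4)
evens = rdd.filter(lambda x: x % 2 == 0)
print(evens.getNumPartitions())  # still 4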

What Are Wide Dependencies?

A wide dependency [or wide transformation] style transformation will have input partitions contributing to many output partitions. You will often hear this referred to as a shuffle, where Spark exchanges partitions across the cluster. With narrow transformations, Spark will automatically perform an operation called pipelining on narrow dependencies: if we specify multiple filters on DataFrames, they will all be performed in memory. The same cannot be said for shuffles. When we perform a shuffle, Spark writes the results to disk. You will see lots of talk about shuffle optimization across the web because it is an important topic, but for now all you need to understand is that there are two kinds of transformations.
(Source: databricks.com)
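
As an illustration of a wide transformation, here is a small sketch using reduceByKey(), which has to gather all the values for a key from every input partition (again assuming the spark session created earlier):

pairs = spark.sparkContext.parallelize(
    [("a", 1), ("b", 1), ("a", 1), ("c", 1)], numSlices=2)

# reduceByKey() must bring together every value for the same key, so rows from
# any input partition can end up in any output partition; that exchange is the shuffle.
counts = pairs.reduceByKey(lambda x, y: x + y)
print(counts.collect())  # e.g. [('b', 1), ('a', 2), ('c', 1)]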

pyspark.RDD.flatMap

Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results. (spark.apache.org/docs)

On a side note, flatMap() is a narrow transformation, i.e.:
- Data computation happens within the same partition of the parent RDD
- Data does not get shuffled across the RDD partitions
- Examples are map(), flatMap(), filter()

The other type of transformation is a wide transformation:
- Records that need to be computed are scattered across the partitions of the parent RDD
- Data gets shuffled while processing
- Examples are groupByKey(), reduceByKey()
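
Here is a short sketch of flatMap() in action, splitting lines of text into words and flattening the result (assuming the same spark session as above):

lines = spark.sparkContext.parallelize(["hello spark", "flatMap is narrow"])

# flatMap() applies the function to each element and flattens the per-element lists into one RDD.
words = lines.flatMap(lambda line: line.split(" "))
print(words.collect())   # ['hello', 'spark', 'flatMap', 'is', 'narrow']

# map() with the same function keeps the nesting:
print(lines.map(lambda line: line.split(" ")).collect())
# [['hello', 'spark'], ['flatMap', 'is', 'narrow']]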
Tags: Spark, Technology