Tuesday, February 7, 2023

Joins in PySpark using RDD, SQL DataFrame, PySpark.Pandas

Download Code and Data

Join Types

Inner Join
The inner join is the default join in Spark SQL. It selects rows that have matching values in both relations.
Syntax: relation [ INNER ] JOIN relation [ join_criteria ]

Left Join
A left join returns all values from the left relation and the matched values from the right relation, appending NULL where there is no match. It is also referred to as a left outer join.
Syntax: relation LEFT [ OUTER ] JOIN relation [ join_criteria ]

Right Join
A right join returns all values from the right relation and the matched values from the left relation, appending NULL where there is no match. It is also referred to as a right outer join.
Syntax: relation RIGHT [ OUTER ] JOIN relation [ join_criteria ]

Full Join
A full join returns all values from both relations, appending NULL values on the side that does not have a match. It is also referred to as a full outer join.
Syntax: relation FULL [ OUTER ] JOIN relation [ join_criteria ]

Cross Join
A cross join returns the Cartesian product of the two relations.
Syntax: relation CROSS JOIN relation [ join_criteria ]

Semi Join
A semi join returns values from the left side of the relation that have a match on the right. It is also referred to as a left semi join.
Syntax: relation [ LEFT ] SEMI JOIN relation [ join_criteria ]

Anti Join
An anti join returns values from the left relation that have no match on the right. It is also referred to as a left anti join.
Syntax: relation [ LEFT ] ANTI JOIN relation [ join_criteria ]

Ref: spark.apache.org

Visualizing Using Venn Diagrams

Spark provides a join() function that can join two paired RDDs based on the same key.

join(): Performs an inner join between two paired RDDs, keeping only keys present in both: firstRDD.join(laterRDD)
rightOuterJoin(): Keeps every key present in the later (right) RDD; keys missing from the first RDD get None on the left side: firstRDD.rightOuterJoin(laterRDD)
leftOuterJoin(): Keeps every key present in the first (left) RDD; keys missing from the later RDD get None on the right side: firstRDD.leftOuterJoin(laterRDD)

Requirement:

Let us consider two datasets from ArisCCNetwork: RouterLocation.tsv and RouterPurchase.tsv.

Schema:

RouterLocation.tsv: RouterID, Name, Location

RouterPurchase.tsv: RouterID, Date, PrimaryMemory, SecondaryMemory, Cost

Join these two datasets to fetch Routers Location, Cost, and Memory details into a single RDD.

Implementation steps to join

Step 1: Create namedtuple classes representing datasets

Create two namedtuple classes representing the schema of each dataset.

Note: a namedtuple is a lightweight tuple subclass with named fields. It improves code readability by letting you access values by descriptive field names instead of positional indexes.
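A sketch of the two namedtuple classes, mirroring the schemas above (the sample values are invented for illustration):

```python
from collections import namedtuple

# One namedtuple class per TSV schema
RouterLocation = namedtuple("RouterLocation", ["RouterID", "Name", "Location"])
RouterPurchase = namedtuple(
    "RouterPurchase", ["RouterID", "Date", "PrimaryMemory", "SecondaryMemory", "Cost"]
)

loc = RouterLocation("R1", "edge-01", "Chennai")
print(loc.Location)  # access by field name...
print(loc[2])        # ...or by position, since it is still a tuple
```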

Step 2: Generate <K,V> pairs using namedtuple

In this step,

datasets are loaded as RDDs

Paired RDDs <K, V> are created, where K = the common column (RouterID) present in both RDDs and V = the value part containing the complete record.

Step 3: Apply the join() function

Spark's join() is applied to the paired RDDs locRDD and purRDD produced in the previous step.
For DataFrame joins, DataFrame.join() takes an optional how parameter:

how: str, optional
Default inner.
Must be one of: inner, cross, outer, full, fullouter, full_outer, left, leftouter, left_outer, right, rightouter, right_outer, semi, leftsemi, left_semi, anti, leftanti and left_anti.
Tags: Spark, Technology, Database
