When writing applications with Spark, developers can run SQL on structured data to get the desired results. An example makes this easier to understand:
[hive@node-3 ~]$ cat SQLApp.py
from pyspark.sql import SparkSession

# Path to the file in HDFS
csvFile = "employees.csv"

# Create a session for this application
spark = SparkSession.builder.appName("SQLApp").getOrCreate()

# Read the CSV file (tab-delimited, with a header row)
csvTable = spark.read.format("csv").option("header", "true").option("delimiter", "\t").load(csvFile)
csvTable.show(3)

# Register the DataFrame as a temporary view so it can be queried with SQL
# (createOrReplaceTempView returns None, so there is nothing to assign)
csvTable.createOrReplaceTempView("employees")

# Find the total salary of employees and print the highest salary makers
highPay = spark.sql("SELECT ...
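The query above is truncated, so its exact text is unknown. To illustrate the kind of aggregation it suggests (ranking employees by salary, highest first) without requiring a Spark cluster, here is a minimal sketch using Python's built-in sqlite3 module; the table layout and sample rows are assumptions for illustration only, not the contents of employees.csv:

```python
import sqlite3

# Hypothetical rows standing in for employees.csv; names and salaries are invented.
rows = [
    ("Alice", "Engineering", 95000),
    ("Bob", "Engineering", 87000),
    ("Carol", "Sales", 72000),
]

# Build an in-memory table analogous to the "employees" temporary view.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)", rows)

# An aggregation in the spirit of the truncated query: highest-paid first.
top = conn.execute(
    "SELECT name, salary FROM employees ORDER BY salary DESC"
).fetchall()
for name, salary in top:
    print(name, salary)
```

The same SELECT string could be passed to spark.sql() once the view is registered; Spark's SQL dialect covers standard aggregation and ordering clauses like these.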