Splitting, slicing, sorting, filtering, and grouping DataFrames over Spark

This recipe shows how to filter, slice, sort, index, and group Pandas DataFrames as well as Spark DataFrames.

Getting ready

To step through this recipe, you will need a running Spark cluster, either in pseudo-distributed mode or in one of the distributed modes, that is, standalone, YARN, or Mesos. You will also need Python and IPython installed on a Linux machine, for example, Ubuntu 14.04.

How to do it…

  1. Invoke ipython console --profile=pyspark and import the required modules as follows:
            In [4]: from pyspark import SparkConf, SparkContext, SQLContext
            In [5]: import pandas as pd
    
  2. Create a Pandas DataFrame as follows:
           In [6]: pdf = pd.DataFrame({'Name':['Padma','Major','Priya'], 
                                       'Age':  [23,45,30]}) 
    
  3. Create a Spark DataFrame ...
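The remaining steps apply the operations named in the recipe title. As a minimal sketch of the Pandas side, assuming the pdf DataFrame created in step 2 (the added Group column is a hypothetical key introduced here purely to illustrate grouping; the Spark DataFrame API mirrors these operations with filter, orderBy, and groupBy):

```python
import pandas as pd

# The DataFrame from step 2
pdf = pd.DataFrame({'Name': ['Padma', 'Major', 'Priya'],
                    'Age':  [23, 45, 30]})

# Filter: keep only rows where Age is greater than 25
older = pdf[pdf['Age'] > 25]

# Slice: take the first two rows by position
head2 = pdf.iloc[:2]

# Sort: order rows by Age, descending
by_age = pdf.sort_values('Age', ascending=False)

# Group: mean Age per group, using a hypothetical grouping key
pdf['Group'] = ['A', 'B', 'A']
mean_age = pdf.groupby('Group')['Age'].mean()
```

On the Spark side, the same pdf can typically be converted with sqlContext.createDataFrame(pdf), after which sdf.filter(sdf.Age > 25), sdf.orderBy(sdf.Age.desc()), and sdf.groupBy('Group').mean('Age') express the equivalent operations.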
