Accessing Cassandra data

Once configuration is complete, we first import a table into a local DataFrame. A DataFrame is Spark's structured datatype: an enhancement of the RDD that carries schema metadata alongside the data. For this example, assume the schema and data are preloaded in a Docker image. Suppose our marketing team wants to send personalized email notifications to all users who have purchased items that currently have offers. To find them, we join the orders table with the offers table on the itemid column in the demo keyspace. Refer to the PySpark API docs for further information: https://spark.apache.org/docs/latest/api/python/index.html.

The commands are as follows:

_keyspace = 'demo'
offers = sqlContext.read.format('org.apache.spark.sql.cassandra').load(table='offers', ...
