Filtering and aggregating Spark datasets

To manipulate the table's dataset, we use the verb functions from dplyr. When you are connected to a Spark DataFrame, these verbs are automatically translated into SQL statements. The best way to understand how this works is with an example, so let's run the following code:

mod_stvincent <- dt_stVincent %>%
  select(code, id, harvwt) %>%
  filter(harvwt > 15) %>%
  arrange(desc(id))

The select() function chooses the code, id, and harvwt columns from our dt_stVincent table object. The filter() function keeps only the rows where harvwt is greater than 15. Finally, arrange() sets the row order, here descending by id. You can also use summarise() for aggregation queries and mutate() to derive new columns.
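As a sketch of those last two verbs, assuming the same dt_stVincent connection is available (the column names mean_harvwt and harvwt_kg are illustrative, not from the original text):

```r
library(dplyr)

# summarise(): mean harvest weight per code, computed by Spark
dt_stVincent %>%
  group_by(code) %>%
  summarise(mean_harvwt = mean(harvwt))

# mutate(): derive a new column; the source table is not modified
dt_stVincent %>%
  mutate(harvwt_kg = harvwt / 1000)

# show_query() prints the SQL that dplyr generates for a pipeline
dt_stVincent %>%
  filter(harvwt > 15) %>%
  show_query()
```

Nothing is pulled into R until you ask for it (for example with collect()); each pipeline stays a lazily evaluated SQL query against Spark.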
