To understand Spark's programming API, we need a sample dataset on which to perform operations and build confidence. To generate this dataset, we will import the sample table from the employees database introduced in the previous chapter.
These are the instructions we follow to generate this dataset:
Log in to the server and switch to the hive user:
ssh user@node-3
[user@node-3 ~]$ sudo su - hive
This will put us in a remote shell, where we can dump the table from the MySQL database:
[hive@node-3 ~]$ mysql -usuperset -A -psuperset -h master employees -e "select * from vw_employee_salaries" > vw_employee_salaries.tsv
[hive@node-3 ~]$ wc -l vw_employee_salaries.tsv
2844048 vw_employee_salaries.tsv
...
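When run in batch mode with -e, the mysql client emits tab-separated values with the column names on the first line, which is exactly the format the redirect above captures. As a quick sketch of what parsing such a file looks like, here is a minimal Python example; the column names and values are hypothetical stand-ins, not the actual schema of vw_employee_salaries:

```python
import csv
import io

# Simulated fragment of mysql batch output: a tab-separated header
# line followed by data rows (column names here are illustrative only).
sample = "emp_no\tsalary\n10001\t60117\n10002\t65828\n"

# csv.DictReader with a tab delimiter maps each row to the header fields.
rows = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))

for row in rows:
    print(row["emp_no"], row["salary"])
```

The same delimiter setting applies when the real vw_employee_salaries.tsv file is later loaded into Spark or any other tool that expects tab-separated input.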