NPRG042 Programming in Parallel Environment [2023/24]

Assignment 4


Duplicate citizens
Apache Spark
Assigned: 16.4.2024
Deadline: 29.4.2024 23:59 (CEST)
Supervisor: Jakub Yaghob
ReCodEx: Not available
speedup        points
4× or less     0
4× to 6×       1
6× to 8×       2
8× to 16×      3
16× or more    4

The task is to count, per region, the people in a given population list who share the same first and last name and live in the same region. A region is identified by the highest number in the postcode.

Use containerized Spark on our SLURM parlab cluster. The external path to the Spark container image is /home/_teaching/para/04-spark/spark. Don't copy the container!

The pre-generated list seznam.csv can be found in the external path /home/_teaching/para/04-spark. The internal path to the list is /opt/data/seznam.csv.

Startup shell script

You will find a spark-slurm.sh startup shell script in the assignment directory. You can customize it to suit your solution. The script will be run using the sbatch SLURM command, therefore you can use the corresponding #SBATCH directives in the script.

In addition to your solution, place the above-mentioned shell script in your ${HOME}/submit_box directory. It must be named spark-slurm.sh, as it is used in automatic testing.

The script will be called with three parameters:

  1. External path to the Spark container image. It has the fixed (external) path /home/_teaching/para/04-spark/spark.
  2. Name of the network interface for Spark communication. It has the fixed name eno1 for the mpi-homo-short SLURM partition.
  3. Read-write directory. The directory will be bound as the /mnt/1 directory inside the container.

Your solution can have any name you want; you just need to modify the script accordingly. The solution will be placed in the above-mentioned R/W directory, therefore it will have an internal path such as /mnt/1/mysolution.py
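As a sketch, the customized script might begin as follows. All #SBATCH values, variable names, and the spark-submit call below are assumptions for illustration; adapt them to whatever the provided script actually does.

```shell
#!/bin/bash
#SBATCH -p mpi-homo-short      # partition used by the automatic test (assumption: set here)
#SBATCH --nodes=4              # hypothetical node count

IMAGE="$1"    # external path to the Spark container image
IFACE="$2"    # network interface for Spark communication (eno1)
RWDIR="$3"    # read-write directory, bound as /mnt/1 inside the container

# ...start the containerized Spark master and workers as in the provided script...
# then run your (hardcoded) solution, e.g.:
# spark-submit /mnt/1/mysolution.py
```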

Output format

The output file must be named output.csv. It is in CSV format: one record per line, no header, LF line endings only. The first column is the "region number", i.e. just the highest number from the postcode; this column is sorted in ascending order. The second column is the number of collisions in that region. Write the output file to the above-mentioned R/W directory inside the container, i.e. use the internal path /mnt/1/output.csv.
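For instance, once the per-region results are collected to the driver as plain tuples, the standard csv module can produce this format. The data and the relative path below are illustrative only; inside the container, write to /mnt/1/output.csv.

```python
import csv

results = [(5, 120), (1, 340), (9, 87)]  # hypothetical (region, collisions) pairs

# newline="" together with lineterminator="\n" guarantees LF-only line
# endings on every platform; no header row is written.
with open("output.csv", "w", newline="") as f:
    writer = csv.writer(f, lineterminator="\n")
    writer.writerows(sorted(results))  # first column in ascending order
```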

Testing

The automatic test proceeds as follows:

  1. Everything in submit_box is MOVED to a target test directory. Please do not develop your code there, as everything will be moved away from this folder.
  2. seznam.csv is added to the target test directory (only as a symlink from the internal path /mnt/1/seznam.csv to the internal path /opt/data/seznam.csv).
  3. sbatch is executed with your spark-slurm.sh from the target test directory, with the three above-mentioned parameters. The script is executed on the mpi-homo-short partition.
  4. The script has to run your application, i.e. the application name is hardcoded in the script (the provided script currently accepts it as a 4th parameter).
  5. The application writes its output to the file /mnt/1/output.csv, from where the test script in the target test directory takes it and compares it with the correct result.
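For a manual dry run you can mimic the grader's invocation from your own read-write directory. The directory path is a placeholder, and the explicit -p flag is an assumption (the partition may equally be set via #SBATCH inside the script):

```shell
cd /path/to/my/rw-dir                     # your own read-write test directory
ln -sf /opt/data/seznam.csv seznam.csv    # target resolves only inside the container
sbatch -p mpi-homo-short spark-slurm.sh \
    /home/_teaching/para/04-spark/spark eno1 "$PWD"
```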

Counting duplicate citizens

In the first step, find the number of collisions for every pair { name, surname } in a region, i.e. how many citizens with the same pair live in the same region. In the second step, compute the resulting count for the region. You can use one of two methods:

  1. Count all 1-collision pairs (pairs occurring only once) and subtract them from the number of all citizens in the region
  2. Add together all 2-(and more-)collision groups in the region
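The equivalence of the two methods can be checked on toy data in plain Python before porting the logic to Spark. The data below and the reading of "highest number from the postcode" as the highest digit are assumptions for illustration only:

```python
from collections import Counter

def region(postcode: str) -> str:
    """Hypothetical reading of the spec: the highest digit of the postcode."""
    return max(postcode)

# Toy population: (name, surname, postcode) -- illustrative data only.
people = [
    ("Jan", "Novak", "11000"),
    ("Jan", "Novak", "10100"),
    ("Eva", "Mala",  "11010"),
    ("Jan", "Novak", "25000"),
]

# Step 1: collision count for every { name, surname } pair per region.
pair_counts = Counter((n, s, region(pc)) for n, s, pc in people)
citizens_per_region = Counter(region(pc) for _, _, pc in people)

# Method 1: subtract the 1-collision (unique) pairs from all citizens.
unique = Counter()
for (n, s, r), c in pair_counts.items():
    if c == 1:
        unique[r] += 1
method1 = {r: citizens_per_region[r] - unique[r] for r in citizens_per_region}

# Method 2: add together all groups of 2 or more colliding citizens.
method2 = Counter()
for (n, s, r), c in pair_counts.items():
    if c >= 2:
        method2[r] += c

assert method1["1"] == method2["1"] == 2   # the two extra Jan Novaks in region 1
assert method1["5"] == 0                   # no collisions in region 5
```

Note that method 2 simply omits regions with zero collisions, while method 1 reports them as 0; pick whichever matches the expected output.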