pi.slurm
Attached to rSCEXAMPLES: SCITAS examples on how to run on the clusters.
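The job is submitted with sbatch pi.slurm. Slurm allocates two nodes, start-spark.sh brings up a Spark cluster on them, and spark-submit then runs pi.py against that cluster's master.
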
#!/bin/bash
#
#SBATCH --nodes=2
# ntasks-per-node MUST be one, because multiple slaves per node don't
# work well with Slurm + Spark in this script (they would need increasing
# ports, among other things)
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=24
#SBATCH --mem=8192
# Beware! $HOME will not be expanded, and invalid paths will result in Slurm
# jobs hanging indefinitely with status CG (completing) when scancel is called!
#SBATCH --time=00:30:00
module load spark
echo "---- starting $0 on $HOSTNAME"
echo
MASTER_NODE=""
# Start Spark on the allocated nodes
start-spark.sh
echo "configuration done..."
set -x
# start-spark.sh is expected to write the master URL to this file
MASTER_IP=$(cat "${SLURM_JOBID}_spark_master")
echo "$MASTER_IP"
time spark-submit \
    --executor-memory 5G \
    --master "$MASTER_IP" \
    ./pi.py
# Tear down Spark before the allocation ends
stop-spark.sh
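
The script ends by submitting ./pi.py, which is not part of this file. Below is a minimal sketch of what it might contain, assuming the standard Monte Carlo pi estimator from the Spark examples; everything except the file name is an assumption.

# pi.py -- a sketch only; just the file name is implied by pi.slurm above.
import random
import sys

from pyspark.sql import SparkSession

if __name__ == "__main__":
    # spark-submit already supplies --master, so none is set here.
    spark = SparkSession.builder.appName("PythonPi").getOrCreate()
    sc = spark.sparkContext

    # Partition count is an arbitrary choice; more partitions spread the
    # sampling across the executors started by pi.slurm.
    partitions = int(sys.argv[1]) if len(sys.argv) > 1 else 2
    n = 100000 * partitions

    def inside(_):
        # Sample a point in the unit square and test whether it lands
        # inside the quarter circle of radius 1.
        x, y = random.random(), random.random()
        return 1 if x * x + y * y <= 1 else 0

    count = sc.parallelize(range(n), partitions).map(inside).reduce(lambda a, b: a + b)
    print("Pi is roughly %f" % (4.0 * count / n))

    spark.stop()

Note how the sketch fits the allocation requested above: with --ntasks-per-node=1 and --cpus-per-task=24, each node runs a single Spark slave that can use all 24 cores, and the 5G executor memory stays within the 8192 MB requested from Slurm.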