F91589531: diffusion.slurm
Attached To: rSCEXAMPLES (SCITAS examples on how to run on the clusters)
#!/bin/bash
# author: gilles fourestey (EPFL)
#
#SBATCH --nodes=2
# ntasks per node MUST be one, because multiple slaves per node do not
# work well with SLURM + Spark in this script (they would need increasing
# port numbers, among other things)
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=24
#SBATCH --mem=8192
# Beware! $HOME will not be expanded, and the resulting invalid paths will leave
# Slurm jobs hanging indefinitely in state CG (completing) when scancel is called!
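# (Illustrative example of the pitfall, not part of the original script: a directive
# such as
#   #SBATCH --output=$HOME/diffusion-%j.out
# is taken literally; use an absolute path instead, e.g.
#   #SBATCH --output=/scratch/<user>/diffusion-%j.out
# where /scratch/<user> is a hypothetical placeholder for your own scratch directory.)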
#SBATCH --time=00:30:00
module load spark
echo "---- starting $0 on $HOSTNAME"
echo
#
MASTER_NODE=""
start-spark.sh
echo "configuration done..."
set -x
MASTER_IP=$(cat ${SLURM_JOBID}_spark_master)
echo $MASTER_IP
time spark-submit \
    --executor-memory 5G \
    --master "$MASTER_IP" \
    ./diffusion.py
stop-spark.sh
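
For reference, a minimal sketch of how a script like this is typically submitted and monitored; the commands below are standard SLURM client tools, and slurm-<jobid>.out is the default output file name (site-specific partitions and paths may differ):

# Submit the batch script; sbatch prints the job ID on success.
sbatch diffusion.slurm

# Watch the job state (PD = pending, R = running, CG = completing).
squeue -u "$USER"

# After completion, the driver output is in slurm-<jobid>.out in the
# submission directory (default SLURM behaviour).
less slurm-<jobid>.out

A job stuck in state CG after scancel is the symptom described by the warning above about unexpanded $HOME paths in #SBATCH directives.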