Stern Center for Research Computing

New York University • Leonard Stern School of Business

GRID Processing at Stern


SCRC supports a moderate-sized collection of computers, called the Stern GRID, which allows researchers to attack problems that are computationally challenging or that require large amounts of disk storage. To distribute these software and hardware resources equitably, a scheduling system called the Sun Grid Engine (SGE) is used.

How Do I Know If I Should Use the Stern GRID?

Here are some typical reasons to use the Stern GRID.

  • You expect your job to run longer than an hour.
  • You expect your job needs a large amount of disk space or processing power.
  • You have many jobs and would like them to run on several computers concurrently to speed up processing time.
  • You want to run the same program with many different datasets.
  • You want to run the same program but need to systematically change a parameter inside the program.
  • You need to use a particular application that is installed only on specific computers in the Stern GRID.

Sun Grid Engine (SGE)

The Sun Grid Engine (SGE) is a sophisticated scheduling system for distributing jobs across a heterogeneous group (a grid) of cooperating computers. Users of the Stern GRID need only know about the requirements of their job, and need not know about all of the different computers that might be able to process their job.
Users describe the processing they need to do by creating a small SGE submission script that tells the SGE software how to run the job and what the special characteristics of the job are. They then submit their request via the shell script to the SGE. The SGE finds the computer(s) that best satisfy the user’s requirements and runs the job.

Resource Distribution

This approach allows efficient use of all the resources available, while also ensuring that resources are used equitably (i.e., one user can’t take over all of the resources). This is the standard approach used in almost all high-performance computing environments.

Array Task Capabilities

The SGE is particularly effective for tasks that can be divided into a number of smaller independent pieces. Each piece can run simultaneously on separate machines.

For example, suppose you want to run a simulation with 120,000 trials, and each trial takes 6 seconds of processing. In a normal environment, your simulation will take 720,000 seconds, or 12,000 minutes, or 200 hours of processing, i.e., more than eight days. However, if you divide your simulation into 10 runs, each of which does 12,000 trials, you will have your output in one tenth of the time: about 20 hours instead of more than a week. See “Array Job” below for an example.

Another type of array task is a sequenced array job, where the processing order of the runs can be specified. For example, say you have a post-processing run that uses as input the output of 10 other runs, i.e., all 10 runs must complete before the 11th post-processing run can start. The SGE can be configured so that you submit all 10+1 runs together and it waits until the first 10 runs are finished before executing the 11th post-processing run. See “Sequenced Array Job” below for an example.

Getting started

Logging In

To log in to the Stern GRID see Stern GRID Access. For Linux basics see Linux Basics.

Your Home Directory

Once logged in to the Stern GRID, you are by default in your home directory. Your home directory resides on a separate server but is accessible from any computer in the Stern GRID.

Using the Sun Grid Engine (SGE)

To use the SGE, you must create a “submission script” that contains the commands and programs that you want to execute. The script also contains options or instructions that control the behavior of the SGE.

Simple Job

Here is an example of an SGE submission script named hello.sge. This script contains several options and one command. Lines containing an option begin with #$. Lines beginning with only a # symbol are comments. The command, echo "hello world", is the Linux command to output the text "hello world". Try it. Don’t forget to change the email address.

# hello.sge
# This script prints "hello world" to the helloJob.o file
# Name this job "helloJob"
#$ -N helloJob
# Put error and output results in the current working directory
# i.e., the directory you ran this script from
#$ -cwd
# This is a very small program, let's
# set the time limit for this job to 5 minutes.
# If you do not specify h_cpu it defaults to 10 hours
# after which your job is automatically terminated.
#$ -l h_cpu=00:05:00
# Send an email at end of job, "e", and if aborted, "a".
#$ -m e
#$ -m a
# Specify where to send email about your job
#$ -M

echo "hello world"

You submit your script (or job) to the SGE queuing system using the qsub command.

qsub hello.sge

After submission, you should see a message similar to:

Your job 55260 ("hello.sge") has been submitted

You can see the status of your job by typing the qstat command. If the job is running, qstat reports something similar to this:

job-ID prior   name     user  state submit/start at    queue  ja-task-ID
55397 0.35 helloJob torvalds  r  04/28/2010 13:49:38 1

Here you can see that the job called “helloJob” has a unique job ID of 55397, and the state of the job is r(unning).

Once the job has completed, two files are created:

helloJob.o#
helloJob.e#

where # is the job ID assigned by the SGE. The e# file contains errors (if any) and the o# file contains your output (if any). If this example job ran successfully, the e# file should be empty and the o# file should contain the program output, namely the text “hello world”.

Array Job

Array jobs are a set of related jobs. Every job in the collection is assigned a task ID which can be used in the job script to control the behavior of the job. An array job is very useful when you have a task that you want to repeat many times but with some part changing every run.
For jobs written in languages such as SAS, Stata, Matlab, or R, the task ID can be used inside the program to control the behavior of the program. See the Matlab, SAS, Stata, and R examples on this page.
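As a minimal illustration of the pattern, a program can read the task ID from its environment; here is a short Python sketch (the per-task input file name is hypothetical):

```python
import os

# SGE sets SGE_TASK_ID in each task's environment; fall back to "1"
# when testing outside the grid.
task_id = int(os.environ.get("SGE_TASK_ID", "1"))

# hypothetical per-task input file name
infile = "series%d.txt" % task_id
print("task %d would read %s" % (task_id, infile))
```

The same idea appears below with getenv in the Matlab example and Sys.getenv in the R example.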

Consider a program that calculates realized volatility over 10 years from the daily closing price series of an asset. Say we want to calculate the realized volatility for 5 different assets. The closing price data for each asset is contained in a separate file. Consider the following submission script, called realVol.sge. The Python program is submitted to the SGE just once but with 5 different data sets (series1-yahoo.txt, series2-corning.txt, series3-cocacola.txt, series4-campbell.txt, and series5-boeing.txt). Try it. Don’t forget to change the email address.

# realVol.sge
# This script runs a python program to
# calculate realized volatility on 5
# different asset price series
# SGE options
#$ -N realVolJob # Job name
#$ -cwd # Put results in current working dir
#$ -l h_cpu=20:00:00 # Limit time to 20 hours (defaults 10 hrs)
#$ -m ea # send email at end of job and if aborted
#$ -M # Specify where to send email
# Use as input the data file corresponding to the
# current task ID

echo series$SGE_TASK_ID*

# run the volatility program (program file name assumed to match the script)
./realVol.py < series$SGE_TASK_ID*

The Python program:

Calculate realized daily volatility for an asset:

V = 100 * sqrt(252/n * sum(Rt^2))

where

V = realized volatility
252 = constant approximating the number of trading days per year
n = number of trading days in the data set
Rt = compounded daily return for day t, calculated as

Rt = ln(Pt / P(t-1))

where

ln = natural logarithm
Pt = closing price of the asset on day t
P(t-1) = closing price of the asset on day t-1

import sys
import math

# ----------------------------------
# main
# ----------------------------------
def main():

  sumsq = 0

  # read first price in file
  price_previous = sys.stdin.readline()
  n = 1   # n counts the trading days read

  # read each subsequent price in file
  for price_current in sys.stdin:
    n = n + 1

    # calculate the daily return
    daily_return = math.log(float(price_current)/float(price_previous))

    # compute the sum of squares
    sumsq = sumsq + daily_return**2

    price_previous = price_current

  # compute and output realized volatility
  real_vol = 100*math.sqrt((252.0/n)*sumsq)
  print("realized volatility = %.2f" % (real_vol))

if __name__ == '__main__':
  main()

Submit the job using qsub and the parameter -t to specify the range of task IDs.

qsub -t 1-5 realVol.sge

The SGE will run the job 5 times, setting the environment variable $SGE_TASK_ID to 1, 2, 3, 4, and 5. The SGE system schedules concurrently as many jobs as it can, depending on the current availability of processors on the Stern GRID. Each job has its own o# and e# file of the form:

realVolJob.o#.$
realVolJob.e#.$

where # is the job ID and $ is the task ID, e.g., realVolJob.o452.1 and realVolJob.e452.1.

The output of the program is the realized volatility for the corresponding price series. Looking at the contents of the file realVolJob.o452.1 we see

realized volatility = 39.74
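As a sanity check of the volatility formula itself, here is the same calculation applied to a tiny made-up three-day price series (illustrative numbers, not one of the data sets above):

```python
import math

# toy closing prices for three consecutive trading days
prices = [100.0, 101.0, 99.0]

# compounded daily returns Rt = ln(Pt / P(t-1))
returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

# V = 100 * sqrt(252/n * sum(Rt^2)), with n = number of trading days
n = len(prices)
real_vol = 100 * math.sqrt((252.0 / n) * sum(r * r for r in returns))
print("realized volatility = %.2f" % real_vol)  # prints: realized volatility = 20.47
```

The short series annualizes to a large volatility because the 252/n factor scales the few observed returns up to a full trading year.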


Sequenced Array Job

Running jobs sequentially rather than concurrently is very useful for multistage processing. For example, say we run 10 jobs concurrently using the array method. Next, we want to post-process the output of those 10 jobs into a single result. We can specify that the post-processing job not begin until the first 10 jobs have completed.

To have a job wait for another job, add the -hold_jid job1_ID option to your post-processing script.

As an example, say we have two scripts sequence1.sge and sequence2.sge. The first script, sequence1.sge runs 10 times. Each task takes a random length of time to complete. It then appends a random number to result.list. The second script, sequence2.sge runs after all of the sequence1.sge tasks have completed. It sorts the numbers in result.list in descending order and puts the sorted list into result.sorted. Try it.

# sequence1.sge
# this script sleeps a random number of seconds up to 30
# then outputs a time stamp message to the .o#.$ file
# and then outputs a random number in range 11-20
# to the result.list file
# SGE options
#$ -N stage1Job # Name this job stage1Job
#$ -cwd # Put results in current working dir
# Sleep for random time between 1 and 30 secs
# This simulates the time it takes to run a job
sleep $(($RANDOM % 30 + 1))

# Print the task ID and the time the task finished
# to compare to the start time of stage 2
echo "stage 1, task $SGE_TASK_ID finished at $(date +%r)"

# create a random number between 11-20
# to represent a pseudo result
echo $(($RANDOM % 10 + 11)) >> result.list

# sequence2.sge
# This script sorts the output from stage 1
# in result.list to result.sorted
# SGE options
#$ -N stage2Job # Name this job stage2Job
#$ -cwd # Put results in current working dir
# Ensures that stage2Job does not begin before stage1Job has finished.
#$ -hold_jid stage1Job
# Print the time that stage 2 began to compare to
# the time of the last job to finish from stage 1
echo "stage 2 began at $(date +%r)"

# sort output from stage 1; largest to smallest
sort -r result.list > result.sorted

Now, submit both jobs to the SGE.

qsub -t 1-10 sequence1.sge
qsub sequence2.sge

Typing qstat directly after submitting the jobs shows that they are q(ueued) and w(aiting) to be scheduled. Additionally, stage2Job has a h(old) status.

job-ID prior   name     user  state submit/start at      queue  ja-task-ID
55398  0.00  stage1Job torvalds qw  04/28/2010 14:02:54       1-10:1
55399  0.00  stage2Job torvalds hqw 04/28/2010 14:03:02       1

After a few seconds, qstat shows most of the tasks running for stage1Job while stage2Job is still on hold.

job-ID prior   name     user  state submit/start at      queue  ja-task-ID
55398 0.35 stage1Job torvalds r 04/28/2010 14:03:08 1 1
55398 0.25 stage1Job torvalds r 04/28/2010 14:03:08 1 2
55398 0.21 stage1Job torvalds r 04/28/2010 14:03:08 1 3
55398 0.20 stage1Job torvalds r 04/28/2010 14:03:08 1 4
55398 0.19 stage1Job torvalds r 04/28/2010 14:03:08 1 5
55398 0.19 stage1Job torvalds r 04/28/2010 14:03:08 1 6
55399 0.00 stage2Job torvalds hqw 04/28/2010 14:03:02     1

A peek into the output files for each stage shows that stage 2 did not start until all tasks from stage 1 finished.

stage 1, task 1 finished at 02:03:27 PM
stage 1, task 10 finished at 02:03:17 PM
stage 1, task 2 finished at 02:03:21 PM
stage 1, task 3 finished at 02:03:26 PM
stage 1, task 4 finished at 02:03:36 PM
stage 1, task 5 finished at 02:03:30 PM
stage 1, task 6 finished at 02:03:14 PM
stage 1, task 7 finished at 02:03:09 PM
stage 1, task 8 finished at 02:03:11 PM
stage 1, task 9 finished at 02:03:14 PM

stage 2 began at 02:03:38 PM


Matlab Job

In this example we use Matlab’s Financial Toolbox to compute Black-Scholes put and call option prices. This example also shows how a Matlab program can use the SGE task ID variable to vary a parameter value. Try it.

# blkscholes.sge
# This script runs a matlab program saving the output into
# a file called blkscholes_#.out
# SGE options
#$ -N blkscholesJob # Name this job
#$ -cwd # Put results in current working dir
#$ -l h_cpu=00:05:00 # Limit time to 5 minutes (defaults 10 hrs)
#$ -l h_vmem=3200m # specify memory and stack size
#$ -l h_stack=512m
#$ -m ea # send email at end of job and if aborted
#$ -M # Specify where to send an email

# Run the program.

matlab712 < blkscholes.m > blkscholes_$SGE_TASK_ID.out

The Matlab program

% blkscholes.m
% Compute the call and put prices of a European option
% on a non-dividend-paying stock using Black-Scholes.
% Suppose the stock price 3 months before the expiration of an option
% is $100, the exercise price is $95, the risk-free interest rate is
% 10% per annum, and we want to compute the call and put prices for
% volatilities specified at run time.
% Use the task ID to vary the value of volatility
task_id = getenv('SGE_TASK_ID')
volat = str2num(task_id)

% task IDs must be specified as whole numbers; convert the
% task ID number to a decimal
volat = volat/100;

% set the parameter values
price = 100.0;
strike = 95.0;
rate = 0.1;
time = 0.25;

% compute the call and put prices and write it to
% standard out; see the blkscholes_#.out file
[call, put] = blsprice(price, strike, rate, time, volat)

Submit the job using qsub and the parameter -t to specify the range of task IDs; increment them by 2, i.e., 30, 32, 34, etc.

qsub -t 30-40:2 blkscholes.sge

Output for SGE_TASK_ID = 30, which converts to volatility = 0.3:

call = 10.1592
put = 2.8136
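As a cross-check on this output, the Black-Scholes formulas can be evaluated directly in a short Python sketch using only the standard library (this reimplements the textbook formulas, not Matlab’s blsprice):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes(price, strike, rate, time, volat):
    """European call and put prices on a non-dividend-paying stock."""
    d1 = (log(price / strike) + (rate + 0.5 * volat**2) * time) / (volat * sqrt(time))
    d2 = d1 - volat * sqrt(time)
    call = price * norm_cdf(d1) - strike * exp(-rate * time) * norm_cdf(d2)
    put = call - price + strike * exp(-rate * time)  # put-call parity
    return call, put

# same parameters as blkscholes.m with SGE_TASK_ID = 30
call, put = black_scholes(100.0, 95.0, 0.1, 0.25, 0.30)
print("call = %.4f  put = %.4f" % (call, put))
```

The printed prices should agree with the Matlab output above to within rounding.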



SAS Job

In this example we use data about the right- or left-handedness observed in men and women and create a cross tabulation of gender versus handedness. Try it.

# crosstab.sge
# This script demonstrates how to run a
# SAS program using the sun grid engine
# SGE options
#$ -N crosstabJob # Name this job
#$ -cwd # Put results in current working dir
#$ -l Sas=1 # Request job run on machine with a SAS license
#$ -l h_cpu=00:10:00 # Limit job run time to 10 minutes
#$ -m ea # send email at end of job and if aborted
#$ -M # Specify where to send an email

# Run the program.
sas -nodms crosstab.sas

The SAS program

This program creates a cross tabulation between gender and
left- and right-handedness.

DATA Hand;
  INPUT gender $ handed $ ;
  DATALINES;
Female    Right
Male      Left
Male      Right
Female    Right
Female    Right
Male      Right
Male      Left
Male      Right
Female    Right
Female    Left
Male      Right
Female    Right
;
RUN;

PROC FREQ DATA=Hand;
  TABLES gender*handed;
RUN;

Submit the job using qsub.

qsub crosstab.sge

Here is the cross tabulation output found in the file crosstab.lst.
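The cell counts in that listing can be verified independently with a short Python sketch over the same 12 observations:

```python
from collections import Counter

# the 12 (gender, handedness) observations from the DATA step above
obs = [
    ("Female", "Right"), ("Male", "Left"),    ("Male", "Right"),
    ("Female", "Right"), ("Female", "Right"), ("Male", "Right"),
    ("Male", "Left"),    ("Male", "Right"),   ("Female", "Right"),
    ("Female", "Left"),  ("Male", "Right"),   ("Female", "Right"),
]

counts = Counter(obs)
for gender in ("Female", "Male"):
    for handed in ("Left", "Right"):
        print("%-6s %-5s %d" % (gender, handed, counts[(gender, handed)]))
# prints: Female Left 1, Female Right 5, Male Left 2, Male Right 4
```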


Stata Job

Here is the SGE submit script for running a simple Stata job. Try it.

# newVarEg.sge
# This script runs an example Stata program
# SGE options
#$ -N newVarEgJob # Name this job
#$ -cwd # Put results in current working dir
#$ -l stata12=1 # Request job run on machine with Stata
#$ -l h_cpu=00:10:00 # Limit time to 10 minutes
#$ -m ea # Send email at end of job and if aborted
#$ -M # Specify where to send an email

# Run the program in batch mode
# Capture the screen output in the .log file
stata-12 -b do newVarEg.do > newVarEg.log

The data file, “testData.raw”

5 7 3
2 2 8
9 6 1

The data dictionary specifying format of fixed data file, “testData.dct”

dictionary using testData.raw {
_column(1) int v1 %2f
_column(3) int v2 %2f
_column(5) int v3 %2f
}
The Stata program.

/* newVarEg.do */

/* Input data from fixed field file */
quietly infile using testData

/* create new variable */
generate v4=v3-v1

/* clear or save data environment */
/* save newVarEg.out */
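For reference, the generate statement computes v4 = v3 - v1 row by row; on testData.raw that gives -2, 6, and -8, as this quick Python check (not part of the Stata job) confirms:

```python
# rows of testData.raw: v1 v2 v3
rows = [(5, 7, 3), (2, 2, 8), (9, 6, 1)]

# v4 = v3 - v1, as in the Stata generate statement
v4 = [v3 - v1 for (v1, v2, v3) in rows]
print(v4)  # prints [-2, 6, -8]
```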

Submit the job using qsub.

qsub newVarEg.sge


R Job

In this example we fit a smooth spline model. Try it.

# fitspline.sge
# This script demonstrates how to run
# an R job, specifically, how to fit a
# smooth spline model.
# It uses a range of task IDs as a noise parameter
# for the model.
# SGE options
#$ -N fitsplineJob # Name this job
#$ -cwd # Put results in current dir
#$ -l h_cpu=00:10:00 # Limit job run time to 10 minutes
#$ -m ea # send email at end of job and if aborted
#$ -M # Specify where to send email

# Run the program

R CMD BATCH --slave fitspline.R fitspline$SGE_TASK_ID.Rout

The R program

# Fit a smooth spline model to randomly generated data.
# Use the task ID to vary the amount of error in the data.
# Create random data.
# Set the initial seed for the random number generator.
# Generate a random integer between 1 and 1000
set.seed( sample(1:1000, 1) )

# Create n random data points.
# Let n = 100 equally spaced values from 0 to 1.
n <- 100
x <- (1:n)/n

# the model in this simulation (no random error)
mval <- ((exp(x/3)-2*exp(-7*x)+sin(9*x))+1)/3

# generate n independent normal random variates with mean 0
# and variance from the task id
tid <- as.integer( Sys.getenv("SGE_TASK_ID") )
v <- tid/100
noise <- rnorm(n,0,v)

# y is simulated observed values (model value + noise)
y <- mval+noise

# or you can read data from a file:
# dat<-read.table("dataset.dat",header=T)
# attach(dat)

# fit smooth spline on data
# use GCV score and all basis functions
fit <- smooth.spline(x,y,cv=F,all.knots=T)

# create a postscript (PS) file for printing and viewing
r <- paste("result_", tid, ".ps", sep = "")
postscript(r, height=8, width=10, horizontal=F)

# plot the data points
plot(x, y)

# plot the model function (without noise)
lines(x, mval, lty=2)

# plot the smooth spline fit
lines(fit)

# close the device to write the PS file
dev.off()

# To view with Ghostscript, at the
# command line type: ghostscript result_X.ps
# where X is the corresponding task ID

Submit the job using qsub and the parameter -t to specify the range of task IDs, i.e., 5, 10, 15, 20, 25.

qsub -t 5-25:5 fitspline.sge

The output is a set of graphs in PostScript files. For example, when
the task ID = 10, the program generates random variates with an error variance of v = 10/100 = 0.1.
It then fits a smooth spline to this data.
To view the graph of this fitted model, type

ghostscript result_10.ps


More SGE Topics

Monitoring a job with qstat

The qstat command shows the status of your job(s) as well as the available queues. Here are some options to qstat.

qstat             display info about your current jobs
qstat -u '*'      display info about all users’ jobs
qstat -g c        summary of queue usage
qstat -f          full-format summary of all queues
qstat -F          resource availability for all queues
qstat -explain c  display the reason for the (c)onfiguration ambiguous state
qstat -explain a  display the reason for the (a)larm state
qstat -explain A  display the reason for the suspend (A)larm state
qstat -explain E  display the reason for an (E)rror state
Example 1

job-ID prior   name     user    state submit/start at    queue     ja-task-ID
56655 0.00 fitsplineJob torvalds qwE 08/27/2010 12:51:48             10-60:10

Here, qstat shows that the job named fitsplineJob with job ID 56655 has three state indicators: E(rror), q(ueued), and w(aiting). It also shows that it is a multi-task job with task numbers incrementing by 10, i.e., 10, 20, .., 60.

Example 2

job-ID prior   name     user  state submit/start at    queue       ja-task-ID
55 0.85 fitsplineJob torvalds r 08/27/2010 12:52:00   10
55 0.75 fitsplineJob torvalds r 08/27/2010 12:52:00   20
55 0.71 fitsplineJob torvalds r 08/27/2010 12:52:00   30
55 0.70 fitsplineJob torvalds r 08/27/2010 12:52:00   40
55 0.69 fitsplineJob torvalds r 08/27/2010 12:52:00   50
55 0.68 fitsplineJob torvalds r 08/27/2010 12:52:00   60

Here we see fitsplineJob is now running the tasks 10, 20, …, 60 simultaneously on six different queues.


Passing an environment variable to a job

You can pass user defined environment variables to a program by using the -v argument.

qsub -v MYVAR="hiThere" -t 1-10 someProg.sge

Retrieve the variable in your program with getenv. See [Matlab job] and [R job] for examples.

Passing entire environment to a job

To pass your entire environment to the job, use the -V option.

qsub -V -t 1-10 someProg.sge

Your program can then retrieve any environment variable defined at qsub time. See the Linux set command for a complete list.

Summary of Commands, Options and Resources

qsub Submit a SGE script
qstat Status of your job(s)
qstat -f Status of queues
qstat -explain c -j job_ID Why my job won’t run
qstat -u “*” Status of all users’ jobs
qdel job_ID Delete job job_ID
qhost Show the status of hosts, queues and jobs
qconf -sc Show all available resources (-l resource)
man SGE_command complete explanation of SGE_command
-hold_jid job_ID, … The job will not execute until all jobs referenced in the comma-separated job_ID list have completed
-m b
-m e
-m a
-m s
b – email is sent at beginning of the job
e – email is sent at the end of the job
a – email is sent if job is aborted
s – email is sent when job is suspended
-M username@host, … Defines the list of email addresses
-l resource, … Request the resource specified
-t min-max Run an array job from index min to index max
-N name Use name as the name for the job. This identifies the job in the queue and sets the name of the files generated by SGE.
-cwd makes SGE run the .sge file in the current directory. It also means that it will place the output files in the current directory rather than your home directory.
-e path/name.eXXXXX Put the error output file in directory path where XXXXX=$JOB_ID
-o path/name.oXXXXX Put the standard output file in directory path where XXXXX=$JOB_ID
-q queue_name Send the job to queue queue_name
-pe parallel_environment slots Request slots of the parallel environment, e.g., -pe mpich 2 would request 2 MPICH slots
-j y combine output and error files into one file
Resources (see “qconf -sc” for a complete list)
64bit=1 schedule job(s) on a 64 bit capable computer
h_cpu=HH:MM:SS hard time limit allowed for job to run
h_vmem=xM x MBs of memory
h_stack=xM x MBs of stack
matlab712=1 schedule job(s) on a computer with Matlab v7.12
Sas=1 schedule job(s) on a computer with SAS
stata12=1 schedule job(s) on a computer with stata v12


Common Errors and Warnings

“Error for job xxxxx: can’t get password entry for user “torvalds”. Either the user does not exist or NIS error!” This error usually means that the job has been submitted to a node that has problems. Contact SCRC at Support.
“Warning: no access to tty (Bad file descriptor). Thus no job control in this shell.” This warning can be ignored.
