
High Performance Computing: Download and prepare data in batch mode

Over time, I have needed to manipulate a lot of data on a Linux cluster. Some of these manipulations actually read or write data, whereas others are essentially file system operations, such as downloading files.
Here I present a list of such operations suitable for HPC, using the PBS job approach whenever possible.
I do not attempt to include all possible methods, only the ones that I find useful and quick to prepare.
An efficient way to download MODIS-like data using HPC:
# Download all NetCDF files recursively from an FTP server
wget -r --no-parent -R "index.html*" --retr-symlinks -A "*.nc" ftp-url
# Download MOD17A2 HDF and XML files for the year 2000 only
wget -r --no-parent -R "index.html*" -A "MOD17A2.A2000*.hdf" -A "MOD17A2.A2000*.xml" http-url
# Download all MOD17 HDF and XML files, any year
wget -r --no-parent -R "index.html*" -A "MOD17*.hdf" -A "MOD17*.xml" http-url
You can basically set up filters for file type, year, and granule ID.
A live example:
#PBS -l nodes=1:ppn=1
#PBS -l naccesspolicy=singleuser
#PBS -l walltime=40:00:00
#PBS -M your_email_address
#PBS -m ae
#PBS -N download
#PBS -q standby
wget -r --no-parent -R "index.html*" --retr-symlinks -A "*.tar" ftp://somewhere

Compress and extract 

# Use this script to extract the tar files under each sub-directory
for dir in `find . -mindepth 1 -maxdepth 1 -type d`
do
    cd "$dir"
    echo "$dir"
    for f in *.tar
    do
        tar xf "$f"
    done
    cd ..
done

# Decompress every gzip file in the current directory
for file in *.gz
do
    gunzip "$file"
done
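The loops above only extract; the compress counterpart packs each sub-directory into its own gzipped tar archive. Below is a minimal sketch; the demo directories and file names are placeholders I made up for illustration:

```shell
#!/bin/bash
# Demo setup: two throw-away sub-directories with one file each (placeholders)
mkdir -p demo_a demo_b
touch demo_a/file1.nc demo_b/file2.nc

# Pack each sub-directory into its own gzipped tar archive
for dir in `find . -mindepth 1 -maxdepth 1 -type d`
do
    name=$(basename "$dir")
    # -C keeps the archive paths relative to the current directory
    tar czf "${name}.tar.gz" -C . "$name"
done
ls *.tar.gz
```

The same skeleton drops straight into a PBS script like the download example above.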


# Search recursively for whole-word matches of a pattern
grep -rnw '/path/to/somewhere/' -e "pattern"
# Find files in the current directory whose names contain a string
find . -maxdepth 1 -name "*string*" -print
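The two commands above combine naturally: use find to select files by name and grep to filter them by content. A quick sketch, where the log files and the "ERROR" pattern are placeholders of my own:

```shell
#!/bin/bash
# Demo setup: two small log files, only one containing the pattern (placeholders)
printf 'step 1 ok\nERROR: disk full\n' > run1.log
printf 'step 1 ok\nall good\n' > run2.log

# List only the files in the current directory that contain "ERROR"
find . -maxdepth 1 -name "*.log" -exec grep -l "ERROR" {} +
```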

# Redirect both stdout and stderr of the build into a file (bash shorthand)
make &> results.txt
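Note that `&>` is bash-specific; the portable POSIX form redirects stderr into stdout explicitly. A quick sketch with echo standing in for make:

```shell
#!/bin/bash
# "&> file" in bash is equivalent to "> file 2>&1" in POSIX sh.
# Both streams end up in the same file:
{ echo "to stdout"; echo "to stderr" >&2; } > results.txt 2>&1
cat results.txt
```

This matters if your PBS script runs under /bin/sh, where `&>` is silently misinterpreted as backgrounding.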


# Count lines in all C++ source files under the current directory
find . -name '*.cpp' | xargs wc -l
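To see which files dominate, the same pipeline can be sorted by line count; using `-exec ... +` instead of xargs also survives spaces in file names. The demo files below are placeholders:

```shell
#!/bin/bash
# Demo setup: three small C++ files of different sizes (placeholders)
printf '%s\n' a b c > big.cpp
printf '%s\n' a b > mid.cpp
printf '%s\n' a > small.cpp

# Count lines per file, sorted ascending; the grand total sorts last
find . -name '*.cpp' -exec wc -l {} + | sort -n
```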


# Start an interactive job on one node with 20 cores and X11 forwarding
qsub -I -l nodes=1:ppn=20 -l walltime=04:00:00 -q boss -X

By organizing the bash scripts above and substituting your own commands and URLs, most file system related tasks can be handled. I will add more related scripts later.

