pyhdf.SD
A module of the pyhdf package implementing the SD (scientific
dataset) API of the NCSA HDF4 library.
(see: hdf.ncsa.uiuc.edu)
Author: Andre Gosselin
Maurice Lamontagne Institute
Andre.Gosselin@dfo-mpo.gc.ca
Maintainer: Enthought, Inc.
Austin, TX
enthought-dev@mail.enthought.com
Version: 0.8-1
Date: August 4 2008
Table of contents
-----------------
Introduction
SD module key features
Accessing the SD module
Package components
Prerequisites
Documentation
Summary of differences between the pyhdf and C SD API
Error handling
Attribute access: low and high level
Variable access: low and high level
Reading/setting multivalued HDF attributes and variables
netCDF files
Classes summary
Data types
Programming models
Examples
Module documentation
Introduction
------------
SD is one of the modules composing pyhdf, a python package implementing
the NCSA HDF library and letting one manage HDF files from within a python
program. Two versions of the HDF library currently exist, version 4 and
version 5. pyhdf only implements version 4 of the library. The HDF4
specification defines many different APIs. Currently, pyhdf implements
just a few of them: the SD, VS and V APIs. Other APIs (GR, AN, etc.)
should be added in the future.
The SD module implements the SD API of the HDF4 library, supporting what
are known as "scientific datasets". The HDF SD API has many similarities
with the netCDF API, another popular API for dealing with scientific
datasets. In fact, netCDF files can be read and modified using the SD
module (though they cannot be created from scratch).
SD module key features
----------------------
SD key features are as follows.
-Almost every routine of the original SD API has been implemented inside
pyhdf. Only a few have been omitted, most of them rarely used:
- SDsetnbitdataset()
- All chunking/tiling routines : SDgetchunkinfo(), SDreadchunk(),
SDsetchunk(), SDsetchunkcache(), SDwritechunk()
- SDsetblocksize()
- SDisdimval_bwcomp(), SDsetdimval_comp()
-It is quite straightforward to go from a C version to a python version
of a program accessing the SD API, and to learn SD usage by referring to
the C API documentation.
-A few high-level python methods have been developed to ease the
programmer's task. Of greatest interest are those allowing access
to SD datasets through familiar python idioms.
-Attributes can be read/written like ordinary python class
attributes.
-Datasets can be read/written like ordinary python lists using
multidimensional indices and so-called "extended slice syntax", with
strides allowed.
See "High level attribute access" and "High level variable access"
sections for details.
-SD offers methods to retrieve a dictionary of the attributes,
dimensions and variables defined on a dataset, and of the attributes
set on a variable and a dimension. Querying a dataset is thus greatly
simplified.
-SD datasets are read/written through "numpy", a sophisticated
python package for efficiently handling multi-dimensional arrays of
numbers. numpy can nicely extend the SD functionality, e.g. by
adding/subtracting arrays with the '+'/'-' operators.
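For example, here is a minimal sketch of the numpy arithmetic alluded to
above (the file name 'example.hdf' and dataset name 'd1' are hypothetical):
>>> from pyhdf.SD import *
>>> from numpy import *
>>> d = SD('example.hdf')     # open the (hypothetical) file read-only
>>> v = d.select('d1')        # select the (hypothetical) dataset
>>> a = v[:]                  # read the whole dataset into a numpy array
>>> print a + 273.15          # numpy arithmetic on the values just read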
Accessing the SD module
-----------------------
To access the SD API a python program can say one of:
>>> import pyhdf.SD # must prefix names with "pyhdf.SD."
>>> from pyhdf import SD # must prefix names with "SD."
>>> from pyhdf.SD import * # names need no prefix
This document assumes the last import style is used.
numpy will also need to be imported:
>>> from numpy import *
Package components
------------------
pyhdf is a proper Python package, i.e. a collection of modules stored under
a directory whose name is that of the package and which stores an
__init__.py file. Following the normal installation procedure, this
directory will be '<python-lib>/site-packages/pyhdf', where <python-lib>
stands for the python installation directory.
For each HDF API there exists a corresponding set of modules.
The following modules are related to the SD API.
_hdfext C extension module responsible for wrapping the HDF
C-library for all python modules
hdfext python module implementing some utility functions
complementing the _hdfext extension module
error defines the HDF4Error exception
SD python module wrapping the SD API routines inside
an OOP framework
_hdfext and hdfext were generated using the SWIG preprocessor.
SWIG is however *not* needed to run the package. Those two modules
are meant to do their work in the background, and should never be called
directly. Only 'pyhdf.SD' should be imported by the user program.
Prerequisites
-------------
The following software must be installed in order for pyhdf release 0.8 to
work.
HDF (v4) library, release 4.2r1
pyhdf does *not* include the HDF4 library, which must
be installed separately.
HDF is available at:
"http://hdf.ncsa.uiuc.edu/obtain.html".
HDF4.2r1 in turn relies on the following packages:
libjpeg (jpeg library) release 6b
libz (zlib library) release 1.1.4 or above
libsz (SZIP library) release 2.0; this package is optional
if pyhdf is installed with NOSZIP macro set
The SD module also needs:
numpy python package
SD variables are read/written using the array data type provided
by the python NumPy package. Note that since version 0.8 of
pyhdf, version 1.0.5 or above of NumPy is needed.
numpy is available at:
"http://www.numpy.org".
Documentation
-------------
pyhdf has been written so as to stick as closely as possible to
the naming conventions and calling sequences documented inside the
"HDF User s Guide" manual. Even if pyhdf gives an OOP twist
to the C API, the manual can be easily used as a documentary source
for pyhdf, once the class to which a function belongs has been
identified, and of course once requirements imposed by the Python
language have been taken into account. Consequently, this documentation
will not attempt to provide an exhaustive coverage of the HDF SD
API. For this, the user is referred to the above manual.
The documentation of each pyhdf method will indicate the name
of the equivalent routine inside the C API.
This document (in both its text and html versions) has been completely
produced using "pydoc", the Python documentation generator (which
made its debut in the 2.1 Python release). pydoc can also be used
as an on-line help tool. For example, to know everything about
the SD.SDS class, say:
>>> from pydoc import help
>>> from pyhdf.SD import *
>>> help(SDS)
To be more specific and get help only for the get() method of the
SDS class:
>>> help(SDS.get) # or...
>>> help(vinst.get) # if vinst is an SDS instance
pydoc can also be called from the command line, as in:
% pydoc pyhdf.SD.SDS # doc for the whole SDS class
% pydoc pyhdf.SD.SDS.get # doc for the SDS.get method
Summary of differences between the pyhdf and C SD API
-----------------------------------------------------
Most of the differences between the pyhdf and C SD API can
be summarized as follows.
-In the C API, every function returns an integer status code, and values
computed by the function are returned through one or more pointers
passed as arguments.
-In pyhdf, error statuses are returned through the Python exception
mechanism, and values are returned as the method result. When the
C API specifies that multiple values are returned, pyhdf returns a
tuple of values, which are ordered similarly to the pointers in the
C function argument list.
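As an illustrative sketch (assuming an existing file 'example.hdf'): where
the C routine SDfileinfo() fills two integer pointers passed by the caller,
the pyhdf equivalent returns both values as a tuple:
>>> d = SD('example.hdf')          # hypothetical existing file
>>> nDatasets, nAttrs = d.info()   # two values, ordered like the C pointers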
Error handling
--------------
All errors that the C SD API reports with a SUCCESS/FAIL error code
are reported by pyhdf using the Python exception mechanism.
When the C library reports a FAIL status, pyhdf raises an HDF4Error
exception (a subclass of Exception) with a descriptive message.
Unfortunately, the C library is rarely informative about the cause of
the error. pyhdf does its best to document the error, but most of the
time it cannot say more than "execution error".
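For instance, opening a non-existent file in read-only mode raises an
HDF4Error, which can be caught as follows (a minimal sketch; the file name
is hypothetical):
>>> try:
...     d = SD('no_such_file.hdf')   # raises HDF4Error if the file is missing
... except HDF4Error, msg:
...     print "HDF4Error:", msg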
Attribute access: low and high level
------------------------------------
In the SD API, attributes can be of many types (integer, float, string,
etc) and can be single or multi-valued. Attributes can be set either at
the dataset, the variable or the dimension level. This can be achieved
in two ways.
-By calling the get()/set() method of an attribute instance. In the
following example, HDF file 'example.hdf' is created, and string
attribute 'title' is attached to the file and given value
'example'.
>>> from pyhdf.SD import *
>>> d = SD('example.hdf',SDC.WRITE|SDC.CREATE) # create file
>>> att = d.attr('title') # create attribute instance
>>> att.set(SDC.CHAR, 'example') # set attribute type and value
>>> print att.get() # get attribute value
>>>
-By handling the attribute like an ordinary Python class attribute.
The above example can then be rewritten as follows:
>>> from pyhdf.SD import *
>>> d = SD('example.hdf',SDC.WRITE|SDC.CREATE)  # create file
>>> d.title = 'example' # set attribute type and value
>>> print d.title # get attribute value
>>>
What has been said above applies as well to multi-valued attributes.
>>> att = d.attr('values') # With an attribute instance
>>> att.set(SDC.INT32, (1,2,3,4,5)) # Assign 5 ints as attribute value
>>> att.get() # Get attribute values
[1, 2, 3, 4, 5]
>>> d.values = (1,2,3,4,5) # As a Python class attribute
>>> d.values # Get attribute values
[1, 2, 3, 4, 5]
When the attribute is known by its name, the standard functions 'setattr()'
and 'getattr()' can be used in place of the dot notation.
The above example becomes:
>>> setattr(d, 'values', (1,2,3,4,5))
>>> getattr(d, 'values')
[1, 2, 3, 4, 5]
Handling an SD attribute like a Python class attribute is admittedly
more natural, and also much simpler. Some control is however lost in
doing so.
-Attribute type cannot be specified. pyhdf automatically selects one of
three types according to the value(s) assigned to the attribute:
SDC.CHAR if the value is a string, SDC.INT32 if all values are integral,
SDC.DOUBLE if at least one value is a float.
-Consequently, byte values cannot be assigned.
-Attribute properties (length, type, index number) can only be queried
through methods of an attribute instance.
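When that control matters, an attribute instance can be queried instead.
A short sketch, reusing attribute 'values' from the example above (the
unpacked names are arbitrary):
>>> att = d.attr('values')            # attribute instance
>>> name, type, nValues = att.info()  # name, type constant, number of values
>>> att.index()                       # attribute index number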
Variable access: low and high level
-----------------------------------
Similarly to attributes, datasets can be read/written in two ways.
The first way is through the get()/set() methods of a dataset instance.
Those methods accept parameters to specify the starting indices, the count
of values to read/write, and the strides along each dimension. For example,
if 'v' is a 4x4 array:
>>> v.get() # complete array
>>> v.get(start=(0,0),count=(1,4)) # first row
>>> v.get(start=(0,1),count=(2,2), # second and third columns of
... stride=(2,1)) # first and third row
The second way is by indexing and slicing the variable like a Python
sequence. pyhdf here follows most of the rules used to index and slice
numpy arrays. Thus an HDF dataset can be seen almost as a numpy
array, except that data is read from/written to a file instead of memory.
Extended indexing lets you access variable elements with the familiar
[i,j,...] notation, with one index per dimension. For example, if 'm' is a
rank 3 dataset, one could write:
>>> m[0,3,5] = m[0,5,3]
When indexing is used to select a dimension in a 'get' operation, this
dimension is removed from the output array, thus reducing its rank by 1. A
rank 0 array is converted to a scalar. Thus, for a 3x3x3 'm' dataset
(rank 3) of integer type:
>>> a = m[0] # a is a 3x3 array (rank 2)
>>> a = m[0,0] # a is a 3 element array (rank 1)
>>> a = m[0,0,0] # a is an integer (rank 0 array becomes a scalar)
Had this rule not been followed, m[0,0,0] would have resulted in a
single-element array, which could complicate computations.
Extended slice syntax allows slicing HDF datasets along each of their
dimensions, with the specification of optional strides to step through
dimensions at regular intervals. For each dimension, the slice syntax
is: "i:j[:stride]", the stride being optional. As with ordinary slices,
the starting and ending values of a slice can be omitted to refer to the
first and last element, respectively, and the end value can be negative to
indicate that the index is measured relative to the tail instead of the
beginning. Omitted dimensions are assumed to be sliced from beginning to
end. Thus:
>>> m[0] # treated as 'm[0,:,:]'.
Example above with get()/set() methods can thus be rewritten as follows:
>>> v[:] # complete array
>>> v[:1] # first row
>>> v[::2,1:3] # second and third columns of first and third row
Indexes and slices can be freely mixed, e.g.:
>>> m[:2,3,1:3:2]
Note that, contrary to indexing, a slice never reduces the rank of the
output array, even if its length is 1. For example, given a 3x3x3 'm'
dataset:
>>> a = m[0] # indexing: a is a 3x3 array (rank 2)
>>> a = m[0:1] # slicing: a is a 1x3x3 array (rank 3)
As can easily be seen, extended slice syntax is much more elegant and
compact, and offers a few possibilities not easily achieved with the
get()/set() methods. Negative indices offer a nice example:
>>> v[-2:] # last two rows
>>> v[-3:-1] # second and third row
>>> v[:,-1] # last column
Reading/setting multivalued HDF attributes and variables
--------------------------------------------------------
Multivalued HDF attributes are set using a python sequence (tuple or
list). Reading such an attribute returns a python list. The easiest way to
read/set an attribute is by handling it like a Python class attribute
(see "High level attribute access"). For example:
>>> d=SD('test.hdf',SDC.WRITE|SDC.CREATE) # create file
>>> d.integers = (1,2,3,4) # define multivalued integer attr
>>> d.integers # get the attribute value
[1, 2, 3, 4]
The easiest way to set multivalued HDF datasets is to assign to a
subset of the dataset, using "[:]" to assign to the whole dataset
(see "High level variable access"). The assigned value can be a python
sequence, which can be multi-leveled when assigning to a multdimensional
dataset. For example:
>>> d=SD('test.hdf',SDC.WRITE|SDC.CREATE) # create file
>>> v1=d.create('v1',SDC.INT32,3) # 3-elem vector
>>> v1[:]=[1,2,3] # assign 3-elem python list
>>> v2=d.create('d2',SDC.INT32,(3,3)) # create 3x3 variable
# The list assigned to v2 is composed
# of 3 lists, each representing a row of v2.
>>> v2[:]=[[1,2,3],[11,12,13],[21,22,23]]
The assigned value can also be a numpy array. Rewriting the example above:
>>> v1[:]=array([1,2,3])
>>> v2[:]=array([[1,2,3],[11,12,13],[21,22,23]])
Note that the indexing expressions 'v1[:]' and 'v2[:]' are needed in both
cases: a plain assignment such as 'v1 = array([1,2,3])' would merely rebind
the python name 'v1', instead of writing values to the dataset.
Reading an HDF dataset always returns a numpy array, except if
indexing is used and produces a rank-0 array, in which case a scalar is
returned.
netCDF files
------------
Files written in the popular Unidata netCDF format can be read and updated
using the HDF SD API. However, pyhdf cannot create netCDF formatted
files from scratch. The python 'pycdf' package can be used for that.
When accessing netCDF files through pyhdf, one should be aware of the
following differences between the netCDF and the HDF SD libraries.
-Differences in terminology can be confusing. What netCDF calls a
'dataset' is called a 'file' or 'SD interface' in HDF. What HDF calls
a dataset is called a 'variable' in netCDF parlance.
-In the netCDF API, dimensions are defined at the global (netCDF dataset)
level. Thus, two netCDF variables defined over dimensions X and Y
necessarily have the same rank and shape.
-In the HDF SD API, dimensions are defined at the HDF dataset level,
except when they are named. Dimensions with the same name are considered
to be "shared" between all the file datasets. They must be of the same
length, and they share all their scales and attributes. For example,
setting an attribute on a shared dimension affects all datasets sharing
that dimension.
-When two or more netCDF variables are based on the unlimited dimension,
they automatically grow in sync. If variables A and B use the unlimited
dimension, adding "records" to A along its unlimited dimension
implicitly adds records in B (which are left in an undefined state and
filled with the fill_value when the file is refreshed).
-In HDF, unlimited dimensions behave independently. If HDF datasets A and
B are based on an unlimited dimension, adding records to A does not
affect the number of records of B. This is true even if the unlimited
dimensions bear the same name (they do not appear to be "shared" as is
the case when the dimensions are fixed).
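Here is a minimal sketch of reading a netCDF file through the SD API (the
file name 'data.nc' is hypothetical; remember that netCDF "variables" show
up as HDF "datasets"):
>>> d = SD('data.nc')      # open the netCDF file read-only
>>> d.datasets().keys()    # names of the netCDF variables
>>> d.end()                # close the file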
Classes summary
---------------
pyhdf wraps the SD API using different types of python classes:
SD HDF SD interface (almost synonymous with the subset of the
HDF file holding all the SD datasets)
SDS scientific dataset
SDim dataset dimension
SDAttr attribute (either at the file, dataset or dimension level)
SDC constants (opening modes, data types, etc)
In more detail:
SD The SD class implements the HDF SD interface as applied to a given
file. This class encapsulates the "SD interface" identifier
(referred to as "sd_id" in the C API documentation), and all
the SD API top-level functions.
To create an SD instance, call the SD() constructor.
methods:
constructors:
SD() open an existing HDF file or create a new one,
returning an SD instance
attr() create an SDAttr (attribute) instance to access
an existing file attribute or create a new one;
"dot notation" can also be used to get and set
an attribute
create() create a new dataset, returning an SDS instance
select() locate an existing dataset given its name or
index number, returning an SDS instance
file closing
end() end access to the SD interface and close the
HDF file
inquiry
attributes() return a dictionary describing every global
attribute attached to the HDF file
datasets() return a dictionary describing every dataset
stored inside the file
info() get the number of datasets stored in the file
and the number of attributes attached to it
nametoindex() get a dataset index number given the dataset
name
reftoindex() get a dataset index number given the dataset
reference number
misc
setfillmode() set the fill mode for all the datasets in
the file
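A short sketch exercising a few of the SD inquiry methods listed above
(the file name 'example.hdf' is hypothetical):
from pyhdf.SD import *
d = SD('example.hdf')                # open existing file read-only
nDatasets, nFileAttrs = d.info()     # dataset and file-attribute counts
for name in d.datasets().keys():     # every dataset in the file
    print name, d.nametoindex(name)  # dataset name and index number
d.end()                              # close the SD interface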
SDAttr The SDAttr class defines an attribute, either at the file (SD),
dataset (SDS) or dimension (SDim) level. The class encapsulates
the object to which the attribute is attached, and the attribute
name.
To create an SDAttr instance, obtain an instance for an SD (file),
SDS (dataset) or dimension (SDim) object, and call its attr()
method.
NOTE. An attribute can also be read/written like
a python class attribute, using the familiar
dot notation. See "High level attribute access".
methods:
read/write value
get() get the attribute value
set() set the attribute value
inquiry
index() get the attribute index number
info() get the attribute name, type and number of
values
SDC The SDC class holds constants defining file opening modes and
data types. Constants are named after their C API counterparts.
file opening modes:
SDC.CREATE create file if non existent
SDC.READ read-only mode
SDC.TRUNC truncate file if already exists
SDC.WRITE read-write mode
data types:
SDC.CHAR 8-bit character
SDC.CHAR8 8-bit character
SDC.UCHAR unsigned 8-bit integer
SDC.UCHAR8 unsigned 8-bit integer
SDC.INT8 signed 8-bit integer
SDC.UINT8 unsigned 8-bit integer
SDC.INT16 signed 16-bit integer
SDC.UINT16 unsigned 16-bit integer
SDC.INT32 signed 32-bit integer
SDC.UINT32 unsigned 32-bit integer
SDC.FLOAT32 32-bit floating point
SDC.FLOAT64 64-bit floating point
dataset fill mode:
SDC.FILL
SDC.NOFILL
dimension:
SDC.UNLIMITED dimension can grow dynamically
data compression:
SDC.COMP_NONE
SDC.COMP_RLE
SDC.COMP_NBIT
SDC.COMP_SKPHUFF
SDC.COMP_DEFLATE
SDC.COMP_SZIP
SDC.COMP_SZIP_EC
SDC.COMP_SZIP_NN
SDC.COMP_SZIP_RAW
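As a sketch of how these constants fit together (assuming 'd' is an SD
instance opened for writing, and assuming the deflate level is passed as
the second argument of setcompress()):
v = d.create('comp_data', SDC.FLOAT32, (100, 100))  # hypothetical dataset
v.setcompress(SDC.COMP_DEFLATE, 5)  # deflate compression, assumed level 5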
SDS The SDS class implements an HDF scientific dataset (SDS) object.
To create an SDS instance, call the create() or select() methods
of an SD instance.
methods:
constructors
attr() create an SDAttr (attribute) instance to access
an existing dataset attribute or create a
new one; "dot notation" can also be used to get
and set an attribute
dim() return an SDim (dimension) instance for a given
dataset dimension, given the dimension index
number
dataset closing
endaccess() terminate access to the dataset
inquiry
attributes() return a dictionary describing every
attribute defined on the dataset
checkempty() determine whether the dataset is empty
dimensions() return a dictionary describing all the
dataset dimensions
info() get the dataset name, rank, dimension lengths,
data type and number of attributes
iscoordvar() determine whether the dataset is a coordinate
variable (holds a dimension scale)
isrecord() determine whether the dataset is appendable
(the dataset dimension 0 is unlimited)
ref() get the dataset reference number
reading/writing data values
get() read data from the dataset
set() write data to the dataset
A dataset can also be read/written using the
familiar index and slice notation used to
access python sequences. See "High level
variable access".
reading/writing standard attributes
getcal() get the dataset calibration coefficients:
scale_factor, scale_factor_err, add_offset,
add_offset_err, calibrated_nt
getdatastrs() get the dataset standard string attributes:
long_name, units, format, coordsys
getfillvalue() get the dataset fill value:
_FillValue
getrange() get the dataset min and max values:
valid_range
setcal() set the dataset calibration coefficients
setdatastrs() set the dataset standard string attributes
setfillvalue() set the dataset fill value
setrange() set the dataset min and max values
compression
getcompress() get info about the dataset compression type and mode
setcompress() set the dataset compression type and mode
misc
setexternalfile() store the dataset in an external file
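A sketch of the standard-attribute helpers (assuming 'v' is a writable SDS
instance, and assuming setrange() takes the min and max in that order):
v.setfillvalue(-999.0)    # sets the '_FillValue' attribute
v.setrange(-2.8, 27.0)    # sets the 'valid_range' attribute
print v.getfillvalue()    # the fill value just set
print v.getrange()        # the valid range just set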
SDim The SDim class implements a dimension object.
To create an SDim instance, call the dim() method of an SDS
(dataset) instance.
Methods:
constructors
attr() create an SDAttr (attribute) instance to access
an existing dimension attribute or create a
new one; "dot notation" can also be used to
get and set an attribute
inquiry
attributes() return a dictionary describing every
attribute defined on the dimension
info() get the dimension name, length, scale data type
and number of attributes
length() return the current dimension length
reading/writing dimension data
getscale() get the dimension scale values
setname() set the dimension name
setscale() set the dimension scale values
reading/writing standard attributes
getstrs() get the dimension standard string attributes:
long_name, units, format
setstrs() set the dimension standard string attributes
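A sketch of dimension handling (assuming 'v' is a writable SDS instance
whose first dimension has length 3):
dim = v.dim(0)                              # first dimension of 'v'
dim.setname('time')                         # name the dimension
dim.setscale(SDC.FLOAT64, (0.0, 0.5, 1.0))  # one scale value per element
print dim.length(), dim.getscale()          # length and scale just set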
Data types
----------
Data types come into play when first defining datasets and their attributes,
and later when querying the definition of those objects.
Data types are specified using the symbolic constants defined inside the
SDC class of the SD module.
- CHAR and CHAR8 (equivalent): an 8-bit character.
- UCHAR, UCHAR8 and UINT8 (equivalent): unsigned 8-bit values (0 to 255)
- INT8: signed 8-bit values (-128 to 127)
- INT16: signed 16-bit values
- UINT16: unsigned 16-bit values
- INT32: signed 32-bit values
- UINT32: unsigned 32-bit values
- FLOAT32: 32-bit floating point values (C floats)
- FLOAT64: 64-bit floating point values (C doubles)
There is no explicit "string" type. To simulate a string, set the
type to CHAR, and set the length to a value 'n' > 1. This creates an
"array of characters", close to a string (except that it will always
be of length 'n', right-padded with spaces if necessary).
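For example, a string-valued attribute can be stored this way (a sketch;
the attribute name is arbitrary and 'v' comes from previous examples):
>>> att = v.attr('comment')              # attribute instance
>>> att.set(SDC.CHAR, 'a string value')  # type CHAR, length set implicitly
>>> att.get()                            # read back as a python string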
Programming models
------------------
Writing
-------
The following code can be used as a model to create an SD dataset.
It shows how to use the most important functionalities
of the SD interface needed to initialize a dataset.
A real program should of course add error handling.
# Import SD and numpy.
from pyhdf.SD import *
from numpy import *
fileName = 'template.hdf'
# Create HDF file.
hdfFile = SD(fileName, SDC.WRITE|SDC.CREATE)
# Assign a few attributes at the file level
hdfFile.author = 'It is me...'
hdfFile.priority = 2
# Create a dataset named 'd1' to hold a 3x3 float array.
d1 = hdfFile.create('d1', SDC.FLOAT32, (3,3))
# Set some attributes on 'd1'
d1.description = 'Sample 3x3 float array'
d1.units = 'celsius'
# Name 'd1' dimensions and assign them attributes.
dim1 = d1.dim(0)
dim2 = d1.dim(1)
dim1.setname('width')
dim2.setname('height')
dim1.units = 'm'
dim2.units = 'cm'
# Assign values to 'd1'
d1[0] = (14.5, 12.8, 13.0) # row 1
d1[1:] = ((-1.3, 0.5, 4.8), # row 2 and
(3.1, 0.0, 13.8)) # row 3
# Close dataset
d1.endaccess()
# Close file
hdfFile.end()
Reading
-------
The following code, which reads the dataset created above, can also serve as
a model for any program which needs to access an SD dataset.
# Import SD and numpy.
from pyhdf.SD import *
from numpy import *
fileName = 'template.hdf'
# Open file in read-only mode (default)
hdfFile = SD(fileName)
# Display attributes.
print "file:", fileName
print "author:", hdfFile.author
print "priority:", hdfFile.priority
# Open dataset 'd1'
d1 = hdfFile.select('d1')
# Display dataset attributes.
print "dataset:", 'd1'
print "description:",d1.description
print "units:", d1.units
# Display dimensions info.
dim1 = d1.dim(0)
dim2 = d1.dim(1)
print "dimensions:"
print "dim1: name=", dim1.info()[0],
print "length=", dim1.length(),
print "units=", dim1.units
print "dim2: name=", dim2.info()[0],
print "length=", dim2.length(),
print "units=", dim2.units
# Show dataset values
print d1[:]
# Close dataset
d1.endaccess()
# Close file
hdfFile.end()
Examples
--------
Example-1
---------
The following simple example exercises some important pyhdf.SD methods. It
shows how to create an HDF dataset, define attributes and dimensions,
create variables, and assign their contents.
Suppose we have a series of text files, each defining a 2-dimensional real-
valued matrix. The first line holds the matrix dimensions, and the following
lines hold the matrix values, one row per line. The following procedure will
load the contents of any one of those text files into an HDF dataset. The
procedure computes the matrix min and max values, storing them as
dataset attributes. It also assigns to the variable the group of
attributes passed as a dictionary by the calling program. Note how simple
such an assignment becomes with pyhdf: the dictionary can contain any
number of attributes, of different types, single or multi-valued. Doing
the same in a conventional language would be a much more challenging task.
Error checking is minimal, to keep the example as simple as possible
(admittedly a rather poor excuse ...).
from numpy import *
from pyhdf.SD import *
import os
def txtToHDF(txtFile, hdfFile, varName, attr):
try: # Catch pyhdf errors
# Open HDF file in update mode, creating it if non existent.
d = SD(hdfFile, SDC.WRITE|SDC.CREATE)
# Open text file and get matrix dimensions on first line.
txt = open(txtFile)
ni, nj = map(int, txt.readline().split())
# Define an HDF dataset of 32-bit floating type (SDC.FLOAT32)
# with those dimensions.
v = d.create(varName, SDC.FLOAT32, (ni, nj))
# Assign attributes passed as argument inside dict 'attr'.
for attrName in attr.keys():
setattr(v, attrName, attr[attrName])
# Load variable with lines of data. Compute min and max
# over the whole matrix.
i = 0
while i < ni:
elems = map(float, txt.readline().split())
v[i] = elems # load row i
minE = min(elems)
maxE = max(elems)
if i:
minVal = min(minVal, minE)
maxVal = max(maxVal, maxE)
else:
minVal = minE
maxVal = maxE
i += 1
# Set variable min and max attributes.
v.minVal = minVal
v.maxVal = maxVal
# Close dataset and file objects (not really necessary, since
# closing is automatic when objects go out of scope).
v.endaccess()
d.end()
txt.close()
except HDF4Error, msg:
print "HDF4Error:", msg
We could now call the procedure as follows:
hdfFile = 'table.hdf'
try: # Delete if exists.
os.remove(hdfFile)
except:
pass
# Load contents of file 'temp.txt' into dataset 'temperature'
# and assign the attributes 'title', 'units' and 'valid_range'.
txtToHDF('temp.txt', hdfFile, 'temperature',
{'title' : 'temperature matrix',
'units' : 'celsius',
'valid_range': (-2.8,27.0)})
# Load contents of file 'depth.txt' into dataset 'depth'
# and assign the same attributes as above.
txtToHDF('depth.txt', hdfFile, 'depth',
{'title' : 'depth matrix',
'units' : 'meters',
'valid_range': (0, 500.0)})
Example 2
---------
This example shows a useful python program that will display the
structure of the SD component of any HDF file whose name is given on
the command line. After the HDF file is opened, high level inquiry methods
are called to obtain dictionaries describing attributes, dimensions and
datasets. The rest of the program mostly consists of nicely formatting
the contents of those dictionaries.
import sys
from pyhdf.SD import *
from numpy import *
# Dictionary used to convert from a numeric data type to its symbolic
# representation
typeTab = {
SDC.CHAR: 'CHAR',
SDC.CHAR8: 'CHAR8',
SDC.UCHAR8: 'UCHAR8',
SDC.INT8: 'INT8',
SDC.UINT8: 'UINT8',
SDC.INT16: 'INT16',
SDC.UINT16: 'UINT16',
SDC.INT32: 'INT32',
SDC.UINT32: 'UINT32',
SDC.FLOAT32: 'FLOAT32',
SDC.FLOAT64: 'FLOAT64'
}
printf = sys.stdout.write
def eol(n=1):
printf("%s" % chr(10) * n)
hdfFile = sys.argv[1] # Get first command line argument
try: # Catch pyhdf.SD errors
# Open HDF file named on the command line
f = SD(hdfFile)
# Get global attribute dictionary
attr = f.attributes(full=1)
# Get dataset dictionary
dsets = f.datasets()
# File name, number of attributes and number of variables.
printf("FILE INFO"); eol()
printf("-------------"); eol()
printf("%-25s%s" % ("File:", hdfFile)); eol()
printf("%-25s%d" % (" file attributes:", len(attr))); eol()
printf("%-25s%d" % (" datasets:", len(dsets))); eol()
eol();
# Global attribute table.
if len(attr) > 0:
printf("File attributes"); eol(2)
printf(" name idx type len value"); eol()
printf(" -------------------- --- ------- --- -----"); eol()
# Get list of attribute names and sort them lexically
attNames = attr.keys()
attNames.sort()
for name in attNames:
t = attr[name]
# t[0] is the attribute value
# t[1] is the attribute index number
# t[2] is the attribute type
# t[3] is the attribute length
printf(" %-20s %3d %-7s %3d %s" %
(name, t[1], typeTab[t[2]], t[3], t[0])); eol()
eol()
# Dataset table
if len(dsets) > 0:
printf("Datasets (idx:index num, na:n attributes, cv:coord var)"); eol(2)
printf(" name idx type na cv dimension(s)"); eol()
printf(" -------------------- --- ------- -- -- ------------"); eol()
# Get list of dataset names and sort them lexically
dsNames = dsets.keys()
dsNames.sort()
for name in dsNames:
# Get dataset instance
ds = f.select(name)
# Retrieve the dictionary of dataset attributes so as
# to display their number
vAttr = ds.attributes()
t = dsets[name]
# t[0] is a tuple of dimension names
# t[1] is a tuple of dimension lengths
# t[2] is the dataset type
# t[3] is the dataset index number
printf(" %-20s %3d %-7s %2d %-2s " %
(name, t[3], typeTab[t[2]], len(vAttr),
ds.iscoordvar() and 'X' or ''))
# Display dimension info.
n = 0
for d in t[0]:
printf("%s%s(%d)" % (n > 0 and ', ' or '', d, t[1][n]))
n += 1
eol()
eol()
# Dataset info.
if len(dsNames) > 0:
printf("DATASET INFO"); eol()
printf("-------------"); eol(2)
for name in dsNames:
# Access the dataset
dsObj = f.select(name)
# Get dataset attribute dictionary
dsAttr = dsObj.attributes(full=1)
if len(dsAttr) > 0:
printf("%s attributes" % name); eol(2)
printf(" name idx type len value"); eol()
printf(" -------------------- --- ------- --- -----"); eol()
# Get the list of attribute names and sort them alphabetically.
attNames = dsAttr.keys()
attNames.sort()
for nm in attNames:
t = dsAttr[nm]
# t[0] is the attribute value
# t[1] is the attribute index number
# t[2] is the attribute type
# t[3] is the attribute length
printf(" %-20s %3d %-7s %3d %s" %
(nm, t[1], typeTab[t[2]], t[3], t[0])); eol()
eol()
# Get dataset dimension dictionary
dsDim = dsObj.dimensions(full=1)
if len(dsDim) > 0:
printf ("%s dimensions" % name); eol(2)
printf(" name idx len unl type natt");eol()
printf(" -------------------- --- ----- --- ------- ----");eol()
# Get the list of dimension names and sort them alphabetically.
dimNames = dsDim.keys()
dimNames.sort()
for nm in dimNames:
t = dsDim[nm]
# t[0] is the dimension length
# t[1] is the dimension index number
# t[2] is 1 if the dimension is unlimited, 0 if not
# t[3] is the dimension scale type, 0 if no scale
# t[4] is the number of attributes
printf(" %-20s %3d %5d %s %-7s %4d" %
(nm, t[1], t[0], t[2] and "X" or " ",
t[3] and typeTab[t[3]] or "", t[4])); eol()
eol()
except HDF4Error, msg:
print "HDF4Error", msg
Data
----
__all__ = ['SD', 'SDAttr', 'SDC', 'SDS', 'SDim', 'HDF4Error']