|CFD High-Performance Computing Environment|
|Course modules: 1. Parallel Computing Explained; 2. Introduction to Parallel Programming using MPI; 3. Intermediate MPI + MPI-2; 4. Multi-level Parallel Programming; 5. Applications in CFD. Each module, where applicable, begins with a list of objectives, includes an introduction describing the best-suited applications or algorithms, denotes typical uses, gives detailed descriptions of routines with many examples, and ends with a test that relates back to the objectives.|
|Date:||May 12, 2004 - May 13, 2004|
|Location:||Carleton University, Ottawa, Ontario, Canada|
|Organizer:||CFD Society of Canada|
|Special Fields:||Scientific Computing|
|Deadlines:||March 22, 2004 (registration)|
|Type of Event:||Course, Local|
CFD High-Performance Computing Environment
Instructor: Dr. Abdelkader Baggag
Email: firstname.lastname@example.org
Phone: (514) 398-3293
Fax: (514) 398-2203
Course Schedule: May 12-13, 2004
Course Location: Carleton University
Registration: www.cfd2004.org/registration.html
Course Fee: $250 CAD / $193 US before March 22; $300 CAD / $231 US after March 22 (transportation and two lunches included in the overall cost).
Prerequisites: No prior experience with MPI or parallel programming is required.
Dr. Abdelkader Baggag completed his doctoral degree in Computer Science, with a minor in Scientific Computation, at the University of Minnesota / Purdue University, earning praise from his faculty advisors and peers, as well as from the researchers he worked with at NASA Langley Research Center in Hampton, Virginia.
Dr. Abdelkader Baggag is an expert in high-performance computing, with extensive Master's, Ph.D., and research experience in large-scale applications on parallel computers. He has worked with leading groups in Montreal, at Minnesota, at Purdue, and at NASA Langley Research Center's Institute for Computer Applications in Science and Engineering (ICASE).
He knows CFD intimately, having studied with leading researchers at Minnesota and Purdue: iterative solvers, discontinuous Galerkin methods, finite volumes, and finite elements.
He knows parallel computing intimately and has applied it successfully to a variety of scientific applications, from aero-acoustics to particulate flows.
He has prepared and taught undergraduate- and graduate-level university courses at Hampton University in Virginia.
At McGill University, he is a Professor in the Department of Mechanical Engineering, teaches "High-Performance Computing" courses, and conducts research on "Parallel Robust Preconditioners for Large-Scale Problems" in collaboration with McGill's CFD laboratory. He also provides "Parallel Numerical Algorithms" support to researchers across McGill University in his role as Senior HPC Analyst at the CLUMEQ supercomputer centre, whose machine ranks among the TOP500 fastest in the world.
Areas of expertise:
- Large-Scale Applications on Parallel Computers
- Computational Fluid Dynamics (CFD)
- Iterative Solvers
- Discontinuous Galerkin
- Finite Volumes
- Finite Elements
- Aero-Acoustics
- Parallel Numerical Simulation of Particulate Flows
- Robust Preconditioners

Course Modules and Description:
Parallel Computing Explained
This module is an introduction to parallel computing. It provides a resource that is useful to both beginners and more experienced users.
- Parallel Computing Overview
- SGI Origin3800 Overview
- Cluster Computing Overview
- Porting Issues
- Timing and Profiling
- Scalar and Cache Tuning
- How to Parallelize a Code
- Parallel Performance Analysis
- Parallel Code Tuning

Introduction to Parallel Programming using MPI
This module is an introduction to parallel programming through the Message Passing Interface (MPI), a standard library of subroutines, or function calls that can be used to implement a message-passing program.
- Message-passing fundamentals
- Getting started with MPI
- MPI program structure
- Point-to-point communication
- Communication modes
- Derived datatypes
- Collective communications
- Communicators
- Virtual topologies

Intermediate MPI + MPI-2
This module covers "intermediate" level topics in MPI, and is both useful and relevant to expand MPI knowledge acquired in the previous module.
- Message Probing
- Derived Datatypes
- User-defined Reduction Operations
- Inter-communicators
- Parallel Input/Output
- Graph Topologies

Multi-level Parallel Programming
This module describes hybrid programming, using OpenMP within computational nodes and MPI across nodes.
- Distributed-Shared Memory machines
- OpenMP Programming Review
- Laplace Solver using OpenMP
- Laplace Solver using MPI
- Combining OpenMP and MPI Programming
- Laplace Solver using OpenMP and MPI
- Multi-level Parallelism and Performance
- Common Multi-Level Parallelism Errors

Applications in CFD:
- Domain Decomposition Techniques
- Parallel Implementation of the Discontinuous Galerkin Method for Aeroacoustics (Explicit)
- Parallel Iterative Solvers and Preconditioners (Implicit)
- Parallel Mathematical Libraries
|Event record first posted on December 23, 2003, last modified on January 7, 2004|