
The Finite Element Machine (FEM) was a late 1970s–early 1980s NASA project to build and evaluate the performance of a parallel computer for structural analysis. The FEM was completed and successfully tested at the NASA Langley Research Center in Hampton, Virginia.[1] The motivation for FEM arose from the merger of two concepts: the finite element method of structural analysis and the introduction of relatively low-cost microprocessors.
In the finite element method, the behavior (stresses, strains and displacements resulting from load conditions) of large-scale structures is approximated by a finite element model consisting of structural elements (members) connected at structural node points. Calculations on traditional computers are performed at each node point and the results communicated to adjacent node points until the behavior of the entire structure is computed. On the Finite Element Machine, microprocessors located at each node point perform these nodal computations in parallel. If there are more node points (N) than microprocessors (P), then each microprocessor handles the computations for roughly N/P node points.

The Finite Element Machine contained 32 processor boards, each with a Texas Instruments TMS9900 processor, 32 Input/Output (IO) boards and a TMS99/4 controller. The FEM was conceived, designed and fabricated at NASA Langley Research Center. The TI 9900 processor chip was selected by the NASA team because it was the first 16-bit processor available on a market that until then had been limited to less powerful 8-bit processors. The FEM concept was first successfully tested to solve beam bending equations on a Langley FEM prototype (4 IMSAI 8080s). This led to full-scale FEM fabrication and testing by the FEM hardware-software-applications team led by Dr. Olaf Storaasli, formerly of NASA Langley Research Center and Oak Ridge National Laboratory (currently at USEC). The first significant Finite Element Machine results are documented in The Finite Element Machine: An Experiment in Parallel Processing (NASA TM 84514).[1]
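The nodal parallelism described above amounts to a static block assignment of node points to processors. The following minimal Python sketch is illustrative only, not the original FEM software: it assumes a hypothetical 1-D chain of N node points, a simple Jacobi-style relaxation as the nodal update, and the machine's P = 32 processors simulated in a serial loop.

    # Hypothetical sketch of the FEM's node-parallel idea: N node points
    # are statically assigned to P processors; each processor updates its
    # own nodes from the previous iterate of adjacent nodes.
    N = 1024   # number of node points (illustrative value)
    P = 32     # number of processors, as on the full-scale FEM

    def nodes_owned(p):
        """Block assignment: processor p owns roughly N/P consecutive nodes."""
        per_proc = N // P
        start = p * per_proc
        end = N if p == P - 1 else start + per_proc
        return range(start, end)

    def sweep(u_old):
        """One relaxation sweep over all processors (simulated serially here)."""
        u_new = u_old[:]
        for p in range(P):
            for i in nodes_owned(p):
                left = u_old[i - 1] if i > 0 else 0.0
                right = u_old[i + 1] if i < N - 1 else 0.0
                u_new[i] = 0.5 * (left + right)   # simple Jacobi-style update
        return u_new

    # Example: iterate a few sweeps from a unit initial state.
    u = [1.0] * N
    for _ in range(10):
        u = sweep(u)

On the actual machine, the 32 processors performed their nodal updates concurrently and communicated results to adjacent node points, rather than looping serially as in the sketch above.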
Based on the Finite Element Machine's success in demonstrating the viability of parallel computing (alongside ILLIAC IV and Goodyear MPP), commercial parallel computers soon went on sale. NASA Langley subsequently purchased a Flex/32 Multicomputer (and later Intel iPSC and Intel Paragon) to continue parallel finite element algorithm R&D. In 1989, the parallel equation solver code, first prototyped on FEM and tested on the Flex/32, was ported to NASA's first Cray Y-MP via Force[2] (Fortran for Concurrent Execution), reducing the structural analysis computation time for the Space Shuttle Challenger Solid Rocket Booster redesign, with 54,870 equations, from 14 hours to 6 seconds. This research accomplishment was awarded the first Cray GigaFLOP Performance Award at Supercomputing '89. The code evolved into NASA's General-Purpose Solver (GPS) for Matrix Equations, used in numerous finite element codes to speed solution time. GPS sped up AlphaStar Corporation's Genoa code 10X, allowing 10X larger applications, for which the team received NASA's 1999 Software of the Year Award and a 2000 R&D 100 Award.