Masters Theses
Date of Award
3-1983
Degree Type
Thesis
Degree Name
Master of Science
Major
Computer Science
Major Professor
Robert T. Gregory
Committee Members
Michael Thomason, Terry Feagin
Abstract
We consider a means of error-free computation that overcomes well-known difficulties encountered when fixed word-length digital computers are used to solve ill-conditioned problems or to execute numerically unstable algorithms. The technique uses extended-precision single-modulus residue arithmetic to perform computations in an environment entirely free of machine roundoff error.
We perform computations using the set of order-N Farey fractions
F_N = { a/b : 0 ≤ |a| ≤ N, 0 < b ≤ N, and gcd(a, b) = 1 } ,
which we map onto integer residues in the finite field
I_p = { 0, 1, 2, ... , p-1 } ,
where p is a prime such that
p ≥ 2N² + 1 .
We replace arithmetic operations on rational numbers in F_N by corresponding operations on integers in I_p. The mapping from F_N into I_p is one-to-one, and consequently an integer result in I_p can be mapped back onto its unique rational equivalent in F_N. If we choose N (and p) large enough, we can perform computations on rational numbers with very large numerators and denominators by performing the corresponding computations on integers in I_p and be guaranteed that the result in F_N is exact.
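The following is a minimal Python sketch of this forward and inverse mapping, assuming p > 2N² with N = ⌊√((p-1)/2)⌋; the function names and the prime p = 257 are illustrative choices and do not come from the thesis.

from math import gcd, isqrt

def farey_to_residue(a, b, p):
    # Map the order-N Farey fraction a/b onto its residue a * b^(-1) (mod p).
    return (a * pow(b, -1, p)) % p

def residue_to_farey(r, p):
    # Recover the unique a/b in F_N, N = isqrt((p - 1) // 2), whose residue is r,
    # by running the extended Euclidean algorithm on (p, r) and stopping as soon
    # as the remainder drops to N or below.
    # (Assumes r really is the image of some Farey fraction in F_N.)
    N = isqrt((p - 1) // 2)
    old_r, rem = p, r % p
    old_t, t = 0, 1
    while rem > N:
        q = old_r // rem
        old_r, rem = rem, old_r - q * rem
        old_t, t = t, old_t - q * t
    a, b = rem, t
    if b < 0:                      # keep the denominator positive
        a, b = -a, -b
    assert abs(a) <= N and 0 < b <= N and gcd(abs(a), b) == 1
    return a, b

p = 257                            # a small prime chosen for illustration; N = 11
r = farey_to_residue(3, 7, p)      # residue of 3/7 in I_p
print(r, residue_to_farey(r, p))   # 184 (3, 7)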
We discuss algorithms and coding of routines for extended-precision two's complement integer arithmetic and illustrate their use with specific examples. We then describe software routines that implement two equivalent forms of exact computation: (1) extended-precision single-modulus residue arithmetic using Mersenne prime moduli, and (2) rational arithmetic in which the numerator and denominator are stored as separate integers. The routines allow us to retrieve numerators or denominators of as many as 1458 decimal digits in a residue arithmetic computation (roughly twice that, 2918 digits, in a rational arithmetic computation).
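The abstract does not reproduce these routines; as a rough illustration of why Mersenne prime moduli are attractive, the Python sketch below reduces a product modulo M = 2^k - 1 by folding high-order bits onto low-order bits rather than by multiple-precision division. The exponent k = 61 and the operands are illustrative choices, not values from the thesis.

def mersenne_reduce(x, k):
    # Reduce a nonnegative integer x modulo the Mersenne number M = 2**k - 1.
    # Since 2**k ≡ 1 (mod M), the high-order bits of x can simply be folded
    # (added) onto the low-order bits; no multiple-precision division is needed.
    m = (1 << k) - 1
    while x > m:
        x = (x & m) + (x >> k)     # low k bits plus high bits
    return 0 if x == m else x      # M itself is congruent to zero

k = 61                             # 2**61 - 1 is a Mersenne prime (illustrative choice)
a, b = 123456789123456789, 987654321987654321
print(mersenne_reduce(a * b, k) == (a * b) % ((1 << k) - 1))   # True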
We apply extended-precision residue and rational arithmetic to four numerical algorithms: (a) solution of simultaneous linear equations by Gaussian elimination with back substitution, (b) the Newton-Raphson method for obtaining roots of an equation, (c) the Milne-Simpson method for solving first-order linear differential equations, and (d) Maclaurin series approximations. We present the exact results of specific computations and compare and contrast the memory and execution-time requirements of residue and rational arithmetic.
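As a concrete, hypothetical illustration of the rational-arithmetic approach applied to algorithm (a), the Python sketch below (using the built-in Fraction type rather than the thesis's extended-precision routines) solves a small, classically ill-conditioned Hilbert system exactly; the matrix and right-hand side are made up for the example.

from fractions import Fraction

def solve_exact(A, b):
    # Gaussian elimination with back substitution over exact rationals.
    # Every intermediate quantity is a Fraction, so the solution is exact.
    n = len(b)
    M = [[Fraction(x) for x in row] + [Fraction(y)] for row, y in zip(A, b)]
    for i in range(n):
        # Find a row with a nonzero entry in column i (size-based pivoting is
        # unnecessary when the arithmetic is exact).
        piv = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# A 3 x 3 Hilbert segment, solved with no roundoff at all.
A = [[Fraction(1, i + j + 1) for j in range(3)] for i in range(3)]
b = [Fraction(1), Fraction(0), Fraction(0)]
print(solve_exact(A, b))           # [Fraction(9, 1), Fraction(-36, 1), Fraction(30, 1)]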
The major results of the study are twofold. First, exact computation produces an enormous improvement in numerical accuracy over hardware integer and floating-point arithmetic, albeit at the cost of vastly increased computer memory and execution time. Second, residue arithmetic computations require significantly longer execution times (by factors of 2 to 12) and somewhat larger memory than equivalent rational arithmetic computations.
Further research needs include the development of additional software to facilitate the use of exact arithmetic, the development of hardware geared to exact computation, research on the detection of pseudo-overflow in residue arithmetic, and the determination of the most efficient means of incorporating exact computation into specific iterative numerical algorithms.
Recommended Citation
Smyre, John L., "Exact computation using extended-precision single-modulus residue arithmetic." Master's Thesis, University of Tennessee, 1983.
https://trace.tennessee.edu/utk_gradthes/14912