Book details
Orthogonal polynomials play a prominent role in pure, applied, and computational mathematics, as well as in the applied sciences. It is the aim of the present volume in the series "Numerical Analysis in the 20th Century" to review, and sometimes extend, some of the many known results and properties of orthogonal polynomials and related quadrature rules. In addition, this volume discusses techniques available for the analysis of orthogonal polynomials and associated quadrature rules. Indeed, the design and computation of numerical integration methods is an important area in numerical analysis, and orthogonal polynomials play a fundamental role in the analysis of many integration methods.

The 20th century has witnessed a rapid development of orthogonal polynomials and related quadrature rules, and we therefore cannot even attempt to review all significant developments within this volume. We have primarily sought to emphasize results and techniques that have been of significance in computational or applied mathematics, or which we believe may lead to significant progress in these areas in the near future. Unfortunately, we cannot claim completeness even within this limited scope. Nevertheless, we hope that the readers of the volume will find the papers of interest and the many references to related work helpful.

We outline the contributions in the present volume. Properties of orthogonal polynomials are the focus of the papers by Marcellán and Álvarez-Nodarse and by Freund. The former contribution discusses "Favard's theorem", i.e., the question under which conditions the recurrence coefficients of a family of polynomials determine a measure with respect to which the polynomials in this family are orthogonal. Polynomials that satisfy a three-term recurrence relation, as well as Szegő polynomials, are considered. The measure is allowed to be signed, i.e., the moment matrix is allowed to be indefinite. Freund discusses matrix-valued polynomials that are orthogonal with respect to a measure that defines a bilinear form. This contribution focuses on breakdowns of the recurrence relations and discusses techniques for overcoming this difficulty. Matrix-valued orthogonal polynomials form the basis for algorithms for reduced-order modeling. Freund's contribution to this volume provides references to such algorithms and their application to circuit simulation.

The contribution by Peherstorfer and Steinbauer analyzes inverse images of polynomial mappings in the complex plane and their relevance to extremal properties of polynomials orthogonal with respect to measures supported on a variety of sets, such as several intervals, lemniscates, or equipotential lines. Applications include fractal theory and Julia sets.

Orthogonality with respect to Sobolev inner products has attracted the interest of many researchers during the last decade. The paper by Martínez discusses some of the recent developments in this area.
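For orientation, a Sobolev inner product in one commonly used form (standard notation, not taken from the paper itself) pairs functions through their derivatives as well as their values:

\[
\langle f, g \rangle_S = \int f(x)\, g(x)\, d\mu_0(x) + \sum_{k=1}^{m} \lambda_k \int f^{(k)}(x)\, g^{(k)}(x)\, d\mu_k(x), \qquad \lambda_k \ge 0.
\]

Because multiplication by x is no longer self-adjoint with respect to such an inner product, the resulting Sobolev orthogonal polynomials in general do not satisfy a three-term recurrence relation, which is one reason their theory differs markedly from the classical case.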
The contribution by López Lagomasino, Pijeira, and Pérez Izquierdo deals with orthogonal polynomials associated with measures supported on compact subsets of the complex plane. The location and asymptotic distribution of the zeros of the orthogonal polynomials, as well as the nth-root asymptotic behavior of these polynomials, are analyzed using methods of potential theory.

Investigations based on spectral theory for symmetric operators can provide insight into the analytic properties of both orthogonal polynomials and the associated Padé approximants. The contribution by Beckermann surveys these results.

Van Assche and Coussement study multiple orthogonal polynomials. These polynomials arise in simultaneous rational approximation; in particular, they form the foundation for simultaneous Hermite-Padé approximation of a system of several functions. The paper compares multiple orthogonal polynomials with the classical families of orthogonal polynomials, such as the Hermite, Laguerre, Jacobi, and Bessel polynomials, using characterization theorems.

Bultheel, González-Vera, Hendriksen, and Njåstad consider orthogonal rational functions with prescribed poles and discuss quadrature rules for their exact integration. These quadrature rules may be viewed as extensions of quadrature rules for Szegő polynomials. The latter rules are exact for rational functions with poles at the origin and at infinity.

Many of the papers of this volume are concerned with quadrature or cubature rules related to orthogonal polynomials. The analysis of multivariate orthogonal polynomials forms the foundation of many cubature formulas. The contribution of Cools, Mysovskikh, and Schmid discusses the connection between cubature formulas and orthogonal polynomials. The paper reviews the development initiated by Radon's seminal contribution from 1948 and discusses open questions. The work by Xu deals with multivariate orthogonal polynomials and cubature formulas for several regions in R^d. Xu shows that orthogonal structures and cubature formulas for these regions are closely related.

The paper by Milovanović deals with the properties of quadrature rules with multiple nodes. These rules generalize the Gauss-Turán rules. Moment-preserving approximation by defective splines is considered as an application.

Computational issues related to Gauss quadrature rules are the topic of the contributions by Ehrich and by Laurie. The latter paper discusses numerical methods for the computation of the nodes and weights of Gauss-type quadrature rules when moments, modified moments, or the recursion coefficients of the orthogonal polynomials associated with a nonnegative measure are known. Ehrich is concerned with how to estimate the error of quadrature rules of Gauss type. This question is important, e.g., for the design of adaptive quadrature routines based on rules of Gauss type.
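As background for the setting of Laurie's paper, the classical Golub-Welsch procedure (a well-known method, sketched here only for illustration and not necessarily one of the algorithms surveyed) obtains the Gauss nodes and weights from the recursion coefficients via an eigendecomposition of the associated Jacobi matrix. A minimal Python sketch, using the standard Legendre coefficients as test data:

    import numpy as np

    def gauss_from_recursion(alpha, beta):
        # Golub-Welsch: nodes/weights of the n-point Gauss rule from the recursion
        # coefficients of the monic orthogonal polynomials
        #   p_{k+1}(x) = (x - alpha_k) p_k(x) - beta_k p_{k-1}(x),
        # with beta[0] equal to the total mass of the measure.
        jacobi = (np.diag(alpha)
                  + np.diag(np.sqrt(beta[1:]), 1)
                  + np.diag(np.sqrt(beta[1:]), -1))
        nodes, vectors = np.linalg.eigh(jacobi)
        weights = beta[0] * vectors[0, :] ** 2    # squared first eigenvector components
        return nodes, weights

    # Gauss-Legendre on [-1, 1]: alpha_k = 0, beta_0 = 2, beta_k = k^2 / (4k^2 - 1)
    n = 5
    k = np.arange(1, n)
    alpha = np.zeros(n)
    beta = np.concatenate(([2.0], k**2 / (4.0 * k**2 - 1.0)))
    x, w = gauss_from_recursion(alpha, beta)
    print(np.sum(w * x**8), 2.0 / 9.0)            # a 5-point rule integrates x^8 exactly

When only moments or modified moments are available, an additional step that recovers the recursion coefficients is required first; that step is part of what the survey addresses.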
The contribution by Mori and Sugihara reviews the double exponential transformation in numerical integration and in a variety of Sinc methods. This transformation enables efficient evaluation of the integrals of analytic functions with endpoint singularities.

Many algorithms for the solution of large-scale problems in science and engineering are based on orthogonal polynomials and Gauss-type quadrature rules. Calvetti, Morigi, Reichel, and Sgallari describe an application of Gauss quadrature to the computation of bounds or estimates of the Euclidean norm of the error in iterates (approximate solutions) generated by an iterative method for the solution of large linear systems of equations with a symmetric matrix. The matrix may be positive definite or indefinite.

The computation of zeros of polynomials is a classical problem in numerical analysis. The contribution by Ammar, Calvetti, Gragg, and Reichel describes algorithms based on Szegő polynomials. In particular, knowledge of the location of zeros of Szegő polynomials is important for the analysis and implementation of filters for time series.
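To make the connection concrete, the sketch below (illustrative only; the function name, the coefficient values, and the normalization are assumptions, not material from the paper) builds a Szegő polynomial from its reflection (Schur/Verblunsky) coefficients via the Szegő recurrence, in one common convention, and then locates its zeros numerically. For a nontrivial positive measure on the unit circle, all reflection coefficients have modulus less than one and the zeros lie strictly inside the unit disk.

    import numpy as np

    def szego_polynomial(reflection_coeffs):
        # Szego recurrence, in one common convention:
        #   phi_{k+1}(z) = z * phi_k(z) - conj(gamma_k) * phi_k*(z),
        # where phi_k*(z) = z^k * conj(phi_k(1/conj(z))) is the reversed polynomial.
        # Coefficients are stored highest degree first (numpy's convention).
        phi = np.array([1.0 + 0j])                 # phi_0(z) = 1
        for gamma in reflection_coeffs:
            phi_star = np.conj(phi[::-1])          # reversed, conjugated coefficients
            phi = np.append(phi, 0.0) - np.conj(gamma) * np.append([0.0], phi_star)
        return phi

    # Illustrative reflection coefficients with |gamma_k| < 1:
    gammas = [0.5, -0.3 + 0.2j, 0.1]
    zeros = np.roots(szego_polynomial(gammas))
    print(np.abs(zeros))                           # all moduli are below 1

This brute-force route (expanding the coefficients and calling a general root finder) is shown only to fix notation; it is not meant to represent the algorithms described in the paper.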