Numerical Analysis
Numerical Analysis is the study of algorithms that seek numerical solutions to mathematically modeled problems from the most diverse areas of human knowledge. In general, numerical analysis algorithms are divided into direct, recursive, and iterative. Iterative algorithms produce a succession of steps that may or may not converge to an approximation of the exact solution. The objective of Numerical Analysis is to find sequences that approach the exact values using a minimum number of elementary operations.
One of the oldest mathematical writings is the Babylonian tablet YBC 7289, which gives a sexagesimal approximation of √2, the length of the diagonal of a unit square.
Being able to calculate the sides of a right triangle (and thus to calculate square roots) is extremely important, for example, in carpentry and construction. On a square wall that is two meters by two meters, a diagonal should measure 2√2 ≈ 2.83 meters.
Although Numerical Analysis was conceived before computers, the subject as we understand it today lies at the intersection of mathematics and computer science. The field is also often referred to as Numerical Calculus, and, so to speak, we will not draw a sharp distinction between Numerical Analysis and Numerical Calculus.
General Introduction to Numerical Analysis
The objective of the field of numerical analysis is to design and analyze techniques for finding approximate but accurate solutions to complex problems, whose variety is illustrated by the following examples.

Advanced numerical methods are essential for reliable weather forecasts.

Calculating the trajectory of an aircraft requires the accurate numerical solution of a system of ordinary differential equations.

Car manufacturers can improve the safety of their vehicles using computer simulations of accidents. Such simulations essentially consist of numerically solving partial differential equations.

Hedge funds use tools from all fields of numerical analysis to try to calculate the value of stocks more accurately than others in the market.

Airlines use sophisticated optimization algorithms to define ticket prices, employee payments and fuel needs. Historically, such algorithms were developed within the field of operations research.

Insurance companies use numerical programs for risk analysis.

The rest of this section highlights several important topics for numerical analysis.
History of Numerical Analysis
The field of numerical analysis predates the invention of the computer by centuries. Linear interpolation has been used for over 2000 years. Great mathematicians of the past worked on numerical analysis, as is evident from the names of important algorithms such as Newton's Method, Lagrange's Polynomial, Gaussian Elimination, and Euler's Method.
To facilitate manual calculations, large books were produced with formulas and tables of data such as interpolation points and coefficients of functions. Using these tables, often calculated to the 16th decimal place or beyond, anyone could look up the values, plug them into formulas, and find approximate numerical estimates for some functions. This work culminated in the 1964 publication, by the National Bureau of Standards, of a book of over 1000 pages edited by Abramowitz and Stegun, containing a large number of commonly used formulas and functions and their values at many points. Function values are no longer of great use when we have a computer at hand, but the many formulas can still be quite useful.
Mechanical calculators were also developed as a tool for hand calculations. These calculators evolved into electronic computers in the 1940s, when it was realized that such machines would be useful for administrative purposes. The invention of the computer also influenced the field of numerical analysis, since larger and more complex calculations could now be carried out.
Direct methods calculate the solution to a problem in a finite number of steps. These methods would give the exact answer if they were performed in infinite-precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution (assuming stability).
In contrast to direct methods, iterative methods are not expected to terminate after a fixed number of steps. Starting from an initial value, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test is specified in order to decide when a sufficiently accurate solution has been found. Even using infinite-precision arithmetic, these methods would (in general) not reach the solution in a finite number of steps. Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.
Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as if they were not, e.g. the generalized minimal residual method (GMRES) and the conjugate gradient method. For these methods, the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same way as for an iterative method.
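As an illustration of the iterative approach described above, the following sketch applies Newton's method to f(x) = x² − 2, approximating the √2 of the Babylonian tablet by successive refinements of an initial guess. The function, tolerance, and iteration limit are illustrative choices, not prescribed by the text.

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x_{k+1} = x_k - f(x_k)/df(x_k) until |f(x_k)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:      # convergence test: stop when f(x) is near zero
            return x
        x = x - fx / df(x)     # Newton update step
    return x

# Approximate sqrt(2) as the positive zero of f(x) = x^2 - 2.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # approximately 1.41421356...
```

Note that the convergence test plays the role described above: the iteration has no fixed step count, and stops only when the approximation is judged accurate enough.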
Discretization
Furthermore, continuous problems must sometimes be replaced by discrete problems whose solution is known to be close to that of the continuous problem; this process is called "discretization". For example, the solution of a differential equation is a function. This function must be represented by a finite amount of data, for instance by its values at a finite number of points in its domain, even though that domain is continuous.
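A minimal sketch of discretization, assuming the model problem y′ = y with y(0) = 1 (whose exact solution is eˣ): the forward Euler method replaces the continuous solution by its approximate values at finitely many grid points. The equation, interval, and step count are illustrative assumptions.

```python
def euler(f, y0, x_end, n):
    """Approximate y' = f(x, y), y(0) = y0 on [0, x_end] using n steps."""
    h = x_end / n                 # grid spacing: the discretization parameter
    x, y = 0.0, y0
    values = [(x, y)]
    for _ in range(n):
        y = y + h * f(x, y)       # advance the discrete solution by one step
        x = x + h
        values.append((x, y))
    return values                 # finitely many (x, y) pairs represent y(x)

approx = euler(lambda x, y: y, 1.0, 1.0, 1000)
print(approx[-1][1])  # close to e = 2.71828..., with a small discretization error
```

The returned list is exactly the "limited amount of data" mentioned above: the continuous function is known only through its values on the grid.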
Calculation of Function Values
One of the simplest problems is the evaluation of a function at a given point. But even the evaluation of a polynomial is not always trivial: Horner's scheme is often far more efficient than the obvious method. In general, it is important to estimate and control the rounding errors that arise from using floating-point arithmetic.
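Horner's scheme can be sketched as follows: the polynomial is rewritten in nested form, so that evaluating a degree-n polynomial needs only n multiplications and n additions instead of computing each power separately. The sample polynomial is an illustrative choice.

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x; coeffs are given highest degree first."""
    result = 0.0
    for c in coeffs:
        result = result * x + c   # fold in one coefficient per step
    return result

# Evaluate p(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3 via the nesting
# p(x) = ((2x - 6)x + 2)x - 1.
print(horner([2, -6, 2, -1], 3.0))  # 5.0
```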
Solving Equations and Systems of Equations
Solving Nonlinear Equations
Solving a nonlinear equation basically consists of determining the zeros of f, that is, solving f(x) = 0 on [a,b].
In order to use a numerical method, we first have to find an interval containing a zero. To get an idea of where the zero is located, we usually perform a graphical analysis of the function, for example by graphing it on a calculator or with computer programs such as Mathematica or MATLAB.
To ensure that the root exists and is unique we have to verify the following theorems:
1) Let f be continuous on [a,b]. If f(a)f(b) < 0, then there exists at least one c ∈ (a,b) such that f(c) = 0. This follows from the Intermediate Value Theorem, which says more generally that if f is continuous on [a,b] and f(a) < d < f(b), then there exists c ∈ (a,b) such that f(c) = d. Taking d = 0 gives the particular case mentioned above.
2) Let f be continuous on [a,b]. If f'(x) exists and has constant sign on [a,b], then f cannot have more than one zero in (a,b). This is equivalent to saying that if f is strictly increasing (or strictly decreasing) on [a,b], then f cannot have more than one zero in (a,b); that is, the graph of f cuts the x-axis in at most one point, which is the zero of f.
One of the numerical methods for computing a zero in an interval is the bisection method. This method consists of dividing the interval in two. One subinterval contains a zero and the other may not; to locate it we use theorem 1. We discard the subinterval that does not contain a zero and keep the one that does. We repeat this procedure as many times as necessary to obtain an error smaller than the intended one.
To bound the error at the kth step, note that after k bisections the zero c is confined to a subinterval of length (b − a)/2^k, so the midpoint x_k of that subinterval satisfies |x_k − c| ≤ (b − a)/2^(k+1).
Another stopping criterion for this algorithm, which can be used together with the previous one, is |f(x_k)| < ε at the kth iteration, where ε is a previously fixed tolerance: it expresses how close to zero we want the value of f at x_k to be. The algorithm may stop when b − a < δ (an error tolerance, also fixed beforehand) OR |f(x_k)| < ε; another possibility, also defined beforehand, is that the algorithm stops only when b − a < δ AND |f(x_k)| < ε.
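The bisection procedure and stopping criteria described above can be sketched as follows; the tolerances δ and ε and the test function are illustrative choices, and here the two criteria are combined with OR.

```python
def bisection(f, a, b, delta=1e-10, eps=1e-10, max_iter=200):
    """Find a zero of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs (theorem 1)")
    for _ in range(max_iter):
        m = (a + b) / 2
        # stop when the interval is short enough OR f(m) is close enough to zero
        if b - a < delta or abs(f(m)) < eps:
            return m
        if f(a) * f(m) < 0:    # the zero lies in [a, m]
            b = m
        else:                  # the zero lies in [m, b]
            a = m
    return (a + b) / 2

# Locate the zero of f(x) = x^2 - 2 bracketed by [1, 2].
print(bisection(lambda x: x * x - 2, 1.0, 2.0))  # approximately 1.41421356...
```

Note how the sign test f(a)f(m) < 0 is exactly theorem 1 applied to the half-interval: it tells us which subinterval to keep.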
Solving Linear Systems
A system of linear equations S_n is a set of n equations in n unknowns (or variables). Systems of linear equations have several applications in mathematics and physics, and they are one of the main topics dealt with in Numerical Calculus.
Generically, a system of linear equations can be represented as:
a11 x1 + a12 x2 + … + a1n xn = b1
a21 x1 + a22 x2 + … + a2n xn = b2
⋮
an1 x1 + an2 x2 + … + ann xn = bn
Here, without loss of generality, we have a system in which the number of equations equals the number of variables (unknowns), which is not always the case in practice.
Systems of linear equations can be solved by direct methods (also called exact or analytical) and by iterative methods (approximate: the method itself already predicts that the result, regardless of rounding errors and of measurement errors introduced when obtaining the coefficients of the variables, will only approximate the exact solution).
Direct, or exact, methods make it possible to find the exact solution of a system of linear equations in a finite number of operations. In fact they are not quite so exact, because rounding errors also occur: dividing by a number very close to zero, when the number of decimal places kept is not large enough, may halt the calculation with what is called an overflow. This can happen when using determinants, for example when solving a system by Cramer's Rule, since we divide by the values of determinants. Determinants, being multilinear functions, are extremely sensitive to small perturbations: modifying the value of a single coefficient can significantly change the value of the determinant. In large problems, in engineering for example, where the coefficient matrix is very large, methods based on determinants therefore lack robustness (they are sensitive to small perturbations in the coefficients of the variables).
Iterative, or approximate, methods are those in which the solution of the system of linear equations is obtained from a sequence of successive approximations x^(1), x^(2), …, x^(k), starting from an initial approximation x^(0).
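A minimal sketch of such an iterative method is Jacobi iteration, mentioned earlier: starting from x^(0), each new approximation x^(k+1) is computed from x^(k) one component at a time. The 2×2 system, tolerance, and iteration limit below are illustrative assumptions; convergence is assumed here because the matrix is diagonally dominant.

```python
def jacobi(A, b, x0, tol=1e-10, max_iter=500):
    """Solve A x = b by successive approximations x^(0), x^(1), ..."""
    n = len(b)
    x = list(x0)
    for _ in range(max_iter):
        # each component uses only the previous approximation x^(k)
        x_new = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        # stop when successive approximations are close enough
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

A = [[4.0, 1.0], [2.0, 5.0]]   # diagonally dominant, so Jacobi converges
b = [9.0, 19.0]
print(jacobi(A, b, [0.0, 0.0]))  # approximately [13/9, 29/9]
```

The stopping test on successive approximations plays the same role as the convergence tests discussed earlier for iterative methods in general.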