> import Data.Complex  -- needed only to test polymorphic versions

------ Most Efficient Solution (Strictly Optional Approach) ------
Polymorphic: argument can have type
Float, Double, Rational, Ratio Int,
Complex Float, or Complex Double
Efficiency: number of arithmetic operations used
proportional to number of terms in series
Space: uses constant space, independent of number of terms
(a left fold instead of foldr is the key here; under lazy
evaluation a strict left fold, foldl', makes the guarantee firm)

function: cosine
delivers an approximation to the cosine of its argument

> cosine :: Fractional num => num -> num
> cosine = foldl (+) 0 . pairDifferences .
>               take numberOfTerms . cosineTerms

function: cosineTerms
delivers a sequence containing approximations to the terms of the
Maclaurin series expansion of cosine

> cosineTerms :: Fractional num => num -> [num]
> cosineTerms = scanl (*) 1 . products . takeThemTwoByTwo . harmonics

function: harmonics
delivers [x, x/2, x/3, ...], where x is the argument

> harmonics :: Fractional num => num -> [num]
> harmonics x = [x/(fromIntegral k) | k <- [1 ..]]

function: pairDifferences
delivers the differences of each non-overlapping, adjacent pair
of numbers from its argument, which is a sequence of numbers

> pairDifferences :: Num num => [num] -> [num]
> pairDifferences xs =
>   [foldr (-) 0 adjacentPair | adjacentPair <- takeThemTwoByTwo xs]

function: products
given a sequence of numeric sequences, delivers the
products formed from each of the numeric sequences

> products :: Num num => [[num]] -> [num]
> products xss = [product xs | xs <- xss]

function: takeThemTwoByTwo
delivers a sequence of two-element sequences made by partitioning
the argument sequence into adjacent pairs

> takeThemTwoByTwo :: [a] -> [[a]]
> takeThemTwoByTwo xs =
>   takeWhile (not . null) [take 2 ys | ys <- iterate (drop 2) xs]
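
As a standalone sanity check (a sketch, not part of the script; the
primed names and the choice x = 1 are mine), the whole pipeline can
be restated at type Double and compared with the Prelude's cos:

```haskell
-- Restatement of the pipeline above, specialized to Double.
harmonics' :: Double -> [Double]
harmonics' x = [x / fromIntegral k | k <- [(1 :: Int) ..]]

takeThemTwoByTwo' :: [a] -> [[a]]
takeThemTwoByTwo' xs =
  takeWhile (not . null) [take 2 ys | ys <- iterate (drop 2) xs]

-- scanl (*) 1 turns the pairwise products x*(x/2), (x/3)*(x/4), ...
-- into the running products 1, x^2/2!, x^4/4!, ...
cosineTerms' :: Double -> [Double]
cosineTerms' = scanl (*) 1 . map product . takeThemTwoByTwo' . harmonics'

-- Difference each adjacent pair of terms, then sum the differences.
-- (take 51 mirrors numberOfTerms in the script.)
cosine' :: Double -> Double
cosine' =
  foldl (+) 0 . map (foldr (-) 0) . takeThemTwoByTwo'
    . take 51 . cosineTerms'
```

For x = 1 this agrees with the Prelude's cos to within a few units
in the last place; for large arguments it goes bad, for the reasons
discussed at the end of the script.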

----------------  Inefficient, But Still Polymorphic -------------
Polymorphic: argument can have type
Float, Double, Rational, Ratio Int,
Complex Float, or Complex Double
Efficiency: number of arithmetic operations used
proportional to square of number of
terms in series
Space: uses space proportional to the number of terms

function: cosineSlowly
delivers an approximation to the cosine of its argument,
using cosineTermsSlowly

> cosineSlowly :: Fractional num => num -> num
> cosineSlowly = alternatingSum . take numberOfTerms . cosineTermsSlowly

function: cosineTermsSlowly
delivers a sequence containing approximations to the terms of the
Maclaurin series expansion of cosine
computes term n as a product of n factors

> cosineTermsSlowly :: Fractional num => num -> [num]
> cosineTermsSlowly x =
>   [(product . take (2*n) . harmonics) x | n <- [0 ..]]
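
The terms produced this way can be checked against the closed form
x^(2n)/(2n)! (a standalone sketch at type Double; the names and the
test argument are mine):

```haskell
-- cosineTermsSlowly restated at Double: term n is the product of
-- the first 2n factors x/1, x/2, ..., x/(2n).
slowTerms :: Double -> [Double]
slowTerms x =
  [ product (take (2 * n) [x / fromIntegral k | k <- [(1 :: Int) ..]])
  | n <- [0 ..] ]

-- The same term written directly as x^(2n) / (2n)!.
closedForm :: Double -> Int -> Double
closedForm x n =
  x ^ (2 * n) / fromIntegral (product [1 .. toInteger (2 * n)])
```

The two agree up to roundoff, which makes the quadratic cost plain:
term n repeats all the multiplications already done for term n-1.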

function: alternatingSum
delivers an alternating sum of the numbers in a sequence:
e.g., alternatingSum [w, x, y, z] = w - x + y - z

> alternatingSum :: Num num => [num] -> num
> alternatingSum = foldr (-) 0
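
A small illustration of why foldr, rather than foldl, is the right
fold here (a sketch; not part of the script):

```haskell
-- foldr (-) 0 [w,x,y,z] nests as w - (x - (y - (z - 0))),
-- i.e. w - x + y - z: the alternating sum.
-- foldl (-) 0 [w,x,y,z] nests as (((0 - w) - x) - y) - z,
-- the negated ordinary sum.
altR, altL :: [Integer] -> Integer
altR = foldr (-) 0
altL = foldl (-) 0
```

For example, altR [1,2,3,4] is -2 (= 1 - 2 + 3 - 4), while
altL [1,2,3,4] is -10.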

-------------------  Inefficient, Not Polymorphic ----------------
(expected solution approach)
Monomorphic: argument must have type Float
Efficiency: number of arithmetic operations used
proportional to square of number of
terms in series
Space: uses space proportional to the number of terms

function: cosineSlowlyWithoutTypePain
delivers an approximation to the cosine of its argument,
using cosineTermsSlowlyWithoutTypePain

> cosineSlowlyWithoutTypePain :: Float -> Float
> cosineSlowlyWithoutTypePain =
>   alternatingSum . take numberOfTerms . cosineTermsSlowlyWithoutTypePain

function: cosineTermsSlowlyWithoutTypePain
delivers a sequence containing approximations to the terms of the
Maclaurin series expansion of cosine
computes term n as a product of n factors

> cosineTermsSlowlyWithoutTypePain :: Float -> [Float]
> cosineTermsSlowlyWithoutTypePain x =
>   [(product . take (2*n) . harmonicsWithoutTypePain) x | n <- [0 ..]]

function: harmonicsWithoutTypePain
delivers [x, x/2, x/3, ...], where x is the argument

> harmonicsWithoutTypePain :: Float -> [Float]
> harmonicsWithoutTypePain x = [x/k | k <- [1 ..]]

-------------------  Hyper-inefficient, Not Polymorphic -------------
But, unlike the other methods, it does deliver the right answer.
Monomorphic: argument must have type Rational
Efficiency: number of arithmetic operations used
proportional to the square of u, where u is
the number of terms in the series times the
logarithm of the largest of the values
{x^n | n <- {1 .. numberOfTerms}} and
{n! | n <- {1 .. numberOfTerms}}
Space: takes lots of space, too -- to retain all those digits
in x^n and n! for all those values of n
(runs with a 5M heap in Hugs 1.3: hugs -h5000000)

function: cosineByBruteForce
delivers an approximation to the cosine of its argument

> cosineByBruteForce :: Rational -> Rational
> cosineByBruteForce x =
>   foldr (+) 0 [((-1)^n) * x^(2*n) / toRational(factorial(2*n)) |
>                               n <- [0 .. (fromIntegral numberOfTerms)]]

function: factorial
delivers the factorial of its argument

> factorial :: Integer -> Integer
> factorial n = product [1 .. n]
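
As a standalone check (the helper name and the choice x = 1 are
mine), the same exact-arithmetic sum can be evaluated with Rational,
converting to floating point only at the very end:

```haskell
import Data.Ratio ((%))

-- Exact partial sum of the cosine series at x = 1:
-- sum of (-1)^n / (2n)! for n = 0 .. 51, rounded to Double
-- only after all the rational arithmetic is done.
bruteCos1 :: Double
bruteCos1 =
  fromRational
    (sum [ (-1) ^ n % product [1 .. 2 * n] | n <- [0 .. 51 :: Integer] ])
```

Because every intermediate result is exact, the only rounding is the
final conversion, and the result matches the Prelude's cos 1 to full
Double precision.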

----------------------- Named Numbers  ---------------------

variable: numberOfTerms
number of terms of the Maclaurin series used in the approximation
of the cosine

> numberOfTerms :: Int
> numberOfTerms = 51

variable: piApproximationRational
pi approximation based on intrinsic floating point approximation

> piApproximationRational :: Rational
> piApproximationRational = realToFrac pi

variable: piApproximationComplex
pi approximation based on intrinsic floating point approximation

> piApproximationComplex :: Complex Float
> piApproximationComplex = realToFrac pi

------ Why the Cosine Approximation Goes Bad For Large Arguments ------

The sizes of the terms x^n/n! involved in the approximation to the
cosine vary widely. Some are very small, on the order of 10^(-6) to
10^(-9). Others are very large; there are several as large as 10^12.
These are the true sizes of the numbers involved, not an effect of
imprecise calculation. But, the fact that they vary so widely makes
it impossible to use these numbers to estimate the cosine unless
they are calculated to a very high precision.
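
This can be seen directly with exact arithmetic (a sketch; the
argument, a rational approximation of 10*pi, is my choice, not taken
from the text):

```haskell
import Data.Ratio ((%))

-- Exact magnitudes x^(2n)/(2n)! of the first 51 series terms,
-- converted to Double only for inspection.
termSizes :: [Double]
termSizes =
  [ fromRational (x ^ (2 * n) / fromInteger (product [1 .. 2 * n]))
  | n <- [0 .. 50 :: Integer] ]
  where
    x = 314159265 % 10000000  -- a rational approximation of 10*pi
```

For this argument the largest term exceeds 10^11 while the last one
computed is below 10^-6, exhibiting the wide spread described above.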

Numbers of type Float carry only six to seven digits of precision.
So, a number of type Float that has over twelve digits in front
of its decimal point will have random garbage in half of them.
Since the value of the cosine has at most one digit in front of
the decimal point, it is hopeless to try to approximate it with a
sum containing terms that have garbage even in one digit before
the decimal point, let alone six. Precision cannot be regained by
adding on some precise terms after it has been lost by adding in
some imprecise terms.

A simple example: Consider two terms s=123456751 and t=123456749.
Then s-t=2. But the best approximation to s using only 7 digits
is s~ = 123456800, and the best 7-digit approximation to t is
t~ = 123456700. Under these circumstances, the calculation of
s-t will approximate this difference as 100 (that is s~ - t~).
Not too close, eh?
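
In binary floating point the effect is even starker. At type Float
(a sketch using the same s and t as above):

```haskell
-- Float carries about 24 bits (6-7 decimal digits) of precision,
-- so the spacing between adjacent Floats near 1.2e8 is 8. Both
-- literals therefore round to the same representable number,
-- 123456752, and their computed difference is 0.
s, t :: Float
s = 123456751
t = 123456749
```

Here s - t evaluates to 0.0: the difference is lost entirely, not
merely misestimated as 100.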

This is the sort of thing that happens while computing the sum of
the terms in the cosine series. They simply cannot be computed
with enough precision to avoid losing significant digits that
would be needed to compute a good approximation. The problem is
endemic in computing with numbers containing a fixed, finite
amount of precision. It cannot be avoided. It is a fact of life.

Good approximations to the cosine function must avoid computing
large numbers that will be added to other numbers. How can this
be done? This is a topic of study in a field of computer science
known as numerical analysis.