An Introduction to Fourier Theory

by Forrest Hoffman
This paper is also available in DVI and PostScript® formats.

Introduction

Linear transforms, especially Fourier and Laplace transforms, are widely used in solving problems in science and engineering. The Fourier transform is used in linear systems analysis, antenna studies, optics, random process modeling, probability theory, quantum physics, and boundary-value problems (Brigham, 2-3) and has been very successfully applied to the restoration of astronomical data (Brault and White). The Fourier transform is pervasive and versatile: it is used in many fields of science as a mathematical or physical tool to recast a problem into one that can be solved more easily. Some scientists understand Fourier theory as a physical phenomenon, not simply as a mathematical tool. In some branches of science, the Fourier transform of one function may yield another physical function (Bracewell, 1-2).


The Fourier Transform

The Fourier transform, in essence, decomposes or separates a waveform or function into sinusoids of different frequency which sum to the original waveform. It identifies or distinguishes the different frequency sinusoids and their respective amplitudes (Brigham, 4). The Fourier transform of f(x) is defined as

F(s) = \int_{-\infty}^{\infty} f(x)\, e^{-i 2\pi x s}\, dx .

Applying the same transform to F(s) gives

f(w) = \int_{-\infty}^{\infty} F(s)\, e^{-i 2\pi w s}\, ds .

If f(x) is an even function of x, that is f(x) = f(-x), then f(w) = f(x) and the original function is recovered. If f(x) is an odd function of x, that is f(x) = -f(-x), then f(w) = f(-x) = -f(x), the reflection of the original function. When f(x) is neither even nor odd, it can often be split into even and odd parts.

To avoid confusion, it is customary to write the Fourier transform and its inverse so that they exhibit reversibility:

F(s) = \int_{-\infty}^{\infty} f(x)\, e^{-i 2\pi x s}\, dx

f(x) = \int_{-\infty}^{\infty} F(s)\, e^{i 2\pi x s}\, ds

so that

f(x) = \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} f(x)\, e^{-i 2\pi x s}\, dx \right] e^{i 2\pi x s}\, ds

as long as the integral exists and any discontinuities are finite; at a finite discontinuity the integral converges to the mean value ½[f(x+) + f(x-)]. The transform quantity F(s) is often written \tilde{f}(s), and the Fourier transform is often represented by an operator, written FT in this paper (Bracewell, 6-8).

There are functions for which the Fourier transform does not exist; however, most physical functions have a Fourier transform, especially if the transform represents a physical quantity. Other functions, including many of the common theoretical functions, can be treated with Fourier theory as limiting cases.

Usually functions or waveforms can be split into even and odd parts as follows

f(x) = E(x) + O(x)

where

E(x) = ½ [f(x) + f(-x)]

O(x) = ½ [f(x) - f(-x)]

and E(x), O(x) are, in general, complex. In this representation, the Fourier transform of f(x) reduces to

2\int_{0}^{\infty} E(x)\, \cos(2\pi x s)\, dx \;-\; 2i \int_{0}^{\infty} O(x)\, \sin(2\pi x s)\, dx .

It follows then that an even function has an even transform and that an odd function has an odd transform. Additional symmetry properties are shown in Table 1 (Bracewell, 14).
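
As a simple illustration of the decomposition itself, consider f(x) = e^x, which is neither even nor odd; its even and odd parts are the hyperbolic functions

E(x) = \tfrac{1}{2}\left( e^{x} + e^{-x} \right) = \cosh x, \qquad O(x) = \tfrac{1}{2}\left( e^{x} - e^{-x} \right) = \sinh x .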

Table 1: Symmetry Properties of the Fourier Transform

    function                        transform
-----------------------------------------------------------
real and even                   real and even
real and odd                    imaginary and odd
imaginary and even              imaginary and even
complex and even                complex and even
complex and odd                 complex and odd
real and asymmetrical           complex and asymmetrical
imaginary and asymmetrical      complex and asymmetrical
real even plus imaginary odd    real
real odd plus imaginary even    imaginary
even                            even
odd                             odd

An important case from Table 1 is that of an Hermitian function, one in which the real part is even and the imaginary part is odd, i.e., f(x) = f*(-x). The Fourier transform of an Hermitian function is real. In addition, the Fourier transform of the complex conjugate of a function f(x) is F*(-s), the reflection of the conjugate of the transform.
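
This symmetry is easy to check numerically. The following sketch (in IDL, using the built-in fft discussed later in this paper) builds a sequence with an even real part and an odd imaginary part and confirms that its transform is real to within rounding error:

  k = findgen(256)
  f = complex( cos(2.0*!Pi*3.0*k/256.0), sin(2.0*!Pi*5.0*k/256.0) )  ; real part even, imaginary part odd
  print, max(abs(imaginary(fft(f, -1))))   ; essentially zero: the transform is purely real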

The cosine transform of a function f(x) is defined as

F_C(s) = 2\int_{0}^{\infty} f(x)\, \cos(2\pi s x)\, dx .

This is equivalent to the Fourier transform if f(x) is an even function. In general, the even part of the Fourier transform of f(x) is the cosine transform of the even part of f(x). The cosine transform has a reverse transform given by

f(x) = 2\int_{0}^{\infty} F_C(s)\, \cos(2\pi s x)\, ds .

Likewise, the sine transform of f(x) is defined by

F_S(s) = 2\int_{0}^{\infty} f(x)\, \sin(2\pi s x)\, dx .

As a result, i times the odd part of the Fourier transform of f(x) is the sine transform of the odd part of f(x).

Combining the sine and cosine transforms of the even and odd parts of f(x) leads to the Fourier transform of the whole of f(x):

FT\{f(x)\} = FT_C\{E(x)\} - i\, FT_S\{O(x)\}

where FT, FT_C, and FT_S denote the Fourier transform, the cosine transform, and the sine transform, respectively, or

F(s) = \tfrac{1}{2} F_C(s) - \tfrac{1}{2} i\, F_S(s)

(Bracewell, 17-18).

Since the Fourier transform F(s) is a frequency domain representation of a function f(x), the variable s characterizes the frequency of the decomposed cosinusoids and sinusoids and is equal to the number of cycles per unit of x (Bracewell, 18-21). If a function or waveform is not periodic, then the Fourier transform of the function will be a continuous function of frequency (Brigham, 4).
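
This interpretation of s can be checked numerically. The sketch below (in IDL, using the built-in fft discussed later in this paper) samples a cosinusoid with exactly 10 cycles per unit of x over a record one unit long; the only significant transform values appear at s = +10 and s = -10 cycles (the negative frequency showing up in bin 246 of the length-256 transform):

  n = 256
  dx = 1.0 / n                                   ; sample spacing; the record spans one unit of x
  f = cos(2.0 * !Pi * 10.0 * findgen(n) * dx)    ; a cosinusoid with 10 cycles per unit of x
  spec = abs(fft(f, -1))
  print, where(spec gt 0.25)                     ; prints 10 and 246, i.e., s = +10 and s = -10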


The Two Domains

It is often useful to think of functions and their transforms as occupying two domains. These domains are referred to as the upper and the lower domains in older texts, ``as if functions circulated at ground level and their transforms in the underworld'' (Bracewell, 135). They are also referred to as the function and transform domains, but in most physics applications they are called the time and frequency domains respectively. Operations performed in one domain have corresponding operations in the other. For example, as will be shown below, the convolution operation in the time domain becomes a multiplication operation in the frequency domain, that is, f(x) * g(x) ↔ F(s) G(s). The reverse is also true, F(s) * G(s) ↔ f(x) g(x). Such theorems allow one to move between domains so that operations can be performed where they are easiest or most advantageous.


Fourier Transform Properties

Scaling Property

If FT{f(x)} = F(s) and a is a real, nonzero constant, then

FT\{f(ax)\} = \int_{-\infty}^{\infty} f(ax)\, e^{-i 2\pi s x}\, dx

= \frac{1}{|a|} \int_{-\infty}^{\infty} f(\beta)\, e^{-i 2\pi s \beta / a}\, d\beta \qquad (\beta = ax)

= \frac{1}{|a|}\, F\!\left(\frac{s}{a}\right).

From this, the time scaling property, it is evident that if the width of a function is decreased while its height is kept constant, then its Fourier transform becomes wider and shorter. If its width is increased, its transform becomes narrower and taller.

A similar frequency scaling property is given by

FT\{\, |a|^{-1} f(x/a) \,\} = F(as).
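
A concrete illustration uses the standard result that the Gaussian e^{-\pi x^2} is its own Fourier transform under the convention adopted here. The scaling property then gives

FT\{ e^{-\pi (a x)^2} \} = \frac{1}{|a|}\, e^{-\pi s^2 / a^2} ,

so compressing the Gaussian in x by a factor a broadens its transform by the same factor and lowers its height by 1/|a|.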

Shifting Property

If FT{f(x)} = F(s) and x0 is a real constant, then

FT\{f(x - x_0)\} = \int_{-\infty}^{\infty} f(x - x_0)\, e^{-i 2\pi s x}\, dx

= \int_{-\infty}^{\infty} f(\beta)\, e^{-i 2\pi s (\beta + x_0)}\, d\beta \qquad (\beta = x - x_0)

= e^{-i 2\pi x_0 s} \int_{-\infty}^{\infty} f(\beta)\, e^{-i 2\pi s \beta}\, d\beta

= F(s)\, e^{-i 2\pi x_0 s}.

This time shifting property states that the Fourier transform of a shifted function is just the transform of the unshifted function multiplied by an exponential factor having a linear phase.

Likewise, the frequency shifting property states that if F(s) is shifted by a constant s_0, its inverse transform is multiplied by e^{i 2\pi x s_0}:

FT\{ f(x)\, e^{i 2\pi x s_0} \} = F(s - s_0).
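
Because a time shift changes only the phase of the transform, the magnitude spectrum is unchanged. A quick numerical check (a sketch in IDL, using the built-in fft described later in this paper and a circular shift as the discrete analogue of translation):

  f = exp( -0.05 * (findgen(256) - 128.0)^2 )                        ; an arbitrary test signal
  print, max(abs( abs(fft(shift(f, 32), -1)) - abs(fft(f, -1)) ))    ; agrees to within rounding error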

Convolution Theorem

We now derive the aforementioned time convolution theorem. Suppose that g(x) = f(x) * h(x). Then, given that FT{g(x)} = G(s), FT{f(x)} = F(s), and FT{h(x)} = H(s),

G(s) = FT\{ f(x) * h(x) \}

= FT\left\{ \int_{-\infty}^{\infty} f(\beta)\, h(x - \beta)\, d\beta \right\}

= \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} f(\beta)\, h(x - \beta)\, d\beta \right] e^{-i 2\pi s x}\, dx

= \int_{-\infty}^{\infty} f(\beta) \left[ \int_{-\infty}^{\infty} h(x - \beta)\, e^{-i 2\pi s x}\, dx \right] d\beta

= H(s) \int_{-\infty}^{\infty} f(\beta)\, e^{-i 2\pi s \beta}\, d\beta

= F(s)\, H(s).

The inner integral in the fourth line is evaluated using the time shifting property derived above. This extremely powerful result demonstrates that the Fourier transform of a convolution is simply given by the product of the individual transforms, that is

FT\{ f(x) * h(x) \} = F(s)\, H(s).

Using a similar derivation, it can be shown that the Fourier transform of a product is given by the convolution of the individual transforms, that is

FT\{ f(x)\, h(x) \} = F(s) * H(s).

This is the statement of the frequency convolution theorem (Gaskill, 194-197; Brigham, 60-65).
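
The discrete analogue of the time convolution theorem holds for circular convolution of sampled, periodic data and is easy to verify. The sketch below (in IDL) assumes that IDL's fft applies a 1/N factor to the forward transform, which is why the product of the two transforms is multiplied by n:

  n = 256
  f = exp( -0.05 * (findgen(n) - 64.0)^2 )
  h = exp( -0.01 * (findgen(n) - 160.0)^2 )
  g = fltarr(n)
  for x = 0, n-1 do begin                   ; direct circular convolution of f and h
    for b = 0, n-1 do g[x] = g[x] + f[b] * h[(x - b + n) mod n]
  endfor
  print, max(abs( fft(g, -1) - n * fft(f, -1) * fft(h, -1) ))   ; agrees to within rounding error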

Correlation Theorem

The correlation integral, like the convolution integral, is important in theoretical and practical applications. The correlation integral is defined as

h(x) = \int_{-\infty}^{\infty} f(u)\, g(x + u)\, du

and like the convolution integral, it forms a Fourier transform pair given by

FT\{ h(x) \} = F^{*}(s)\, G(s)

for real f(x), where the asterisk denotes complex conjugation.

This is the statement of the correlation theorem. If f(x) and g(x) are the same function, the integral above is normally called the autocorrelation function, and the crosscorrelation if they differ (Brigham, 65-69). The Fourier transform pair for the autocorrelation is simply

FT\left\{ \int_{-\infty}^{\infty} f(u)\, f(x + u)\, du \right\} = |F(s)|^{2}.
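
As with the convolution theorem, the discrete (circular) version can be checked directly. A minimal sketch in IDL, again assuming the 1/N factor on IDL's forward fft:

  n = 256
  f = exp( -0.05 * (findgen(n) - 128.0)^2 )
  r = fltarr(n)
  for x = 0, n-1 do begin                   ; direct circular autocorrelation of f
    for u = 0, n-1 do r[x] = r[x] + f[u] * f[(x + u) mod n]
  endfor
  print, max(abs( fft(r, -1) - n * abs(fft(f, -1))^2 ))   ; agrees to within rounding error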


Parseval's Theorem

Parseval's Theorem states that the energy of a signal represented by a function h(t) is the same whether computed in signal space or frequency (transform) space; that is,

\int_{-\infty}^{\infty} h^{2}(t)\, dt = \int_{-\infty}^{\infty} |H(f)|^{2}\, df

(Brigham, 23). The power spectrum, P(f), is given by

P(f) = |H(f)|^{2}, \qquad -\infty \le f \le +\infty .
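
The discrete analogue is easily verified numerically. In the IDL sketch below, the factor of n on the right-hand side again accounts for the assumed 1/N normalization of IDL's forward fft:

  n = 256
  h = randomu(seed, n)                                ; an arbitrary test signal
  print, total(h^2), n * total(abs(fft(h, -1))^2)     ; the two values agree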


Sampling Theorem

A bandlimited signal is a signal, f(t), which has no spectral components beyond a frequency B Hz; that is, F(s) = 0 for |s| > B (equivalently, for angular frequencies |ω| > 2πB). The sampling theorem states that a real signal, f(t), which is bandlimited to B Hz can be reconstructed without error from samples taken uniformly at a rate R > 2B samples per second. This minimum sampling frequency, F_s = 2B Hz, is called the Nyquist rate or the Nyquist frequency. The corresponding sampling interval, T = 1/(2B) (where t = nT), is called the Nyquist interval. A signal bandlimited to B Hz which is sampled at less than the Nyquist frequency of 2B, i.e., which is sampled at an interval T > 1/(2B), is said to be undersampled.
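
For reference, the reconstruction guaranteed by the theorem is the standard sinc interpolation of the samples (stated here without proof):

f(t) = \sum_{n=-\infty}^{\infty} f(nT)\, \mathrm{sinc}\!\left( 2B (t - nT) \right), \qquad \mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x}, \quad T = \frac{1}{2B}.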


Aliasing

A number of practical difficulties are encountered in reconstructing a signal from its samples. The sampling theorem assumes that a signal is bandlimited. In practice, however, signals are timelimited, not bandlimited. As a result, determining an adequate sampling frequency which does not lose desired information can be difficult. When a signal is undersampled, its spectrum has overlapping tails; that is, F(s) no longer has complete information about the spectrum and it is no longer possible to recover f(t) from the sampled signal. In this case, the tailing spectrum does not go to zero, but is folded back onto the apparent spectrum. This inversion of the tail is called spectral folding or aliasing (Lathi, 532-535).


Figure 1: Undersampled, oversampled, and critically-sampled unit area gaussian curves.

As an example, Figure 1 shows a unit gaussian curve sampled at three different rates. The FFT (or Fast Fourier Transform) of the undersampled gaussian appears flattened and its tails do not reach zero. This is a result of aliasing. Additional spectral components have been folded back onto the ends of the spectrum or added to the edges to produce this curve.

The FFT of the oversampled gaussian reaches zero very quickly. Much of its spectrum is zero and is not needed to reconstruct the original gaussian.

Finally, the FFT of the critically-sampled gaussian has tails which reach zero at their ends. The data in the spectrum of the critically-sampled gaussian is just sufficient to reconstruct the original. This gaussian was sampled at the Nyquist frequency.

Figure 1 was generated using IDL with the following code:

  !P.Multi=[0,3,2]
  a=gauss(256,2.0,2) ; undersampled
  fa=fft(a,-1)
  b=gauss(256,2.0,0.1) ; oversampled
  fb=fft(b,-1)
  c=gauss(256,2.0,0.8) ; critically sampled
  fc=fft(c,-1)
  plot,a,title='!6Undersampled Gaussian'
  plot,b,title='!6Oversampled Gaussian'
  plot,c,title='!6Critically-Sampled Gaussian'
  plot,shift(abs(fa),128),title='!6FFT of Undersampled Gaussian'
  plot,shift(abs(fb),128),title='!6FFT of Oversampled Gaussian'
  plot,shift(abs(fc),128),title='!6FFT of Critically-Sampled Gaussian'
The gauss function is as follows:

  function gauss,dim,fwhm,interval
  ;
  ; gauss - produce a normalized gaussian curve centered in dim data
  ;         points with a full width at half maximum of fwhm sampled
  ;         with a periodicity of interval
  ;
  ; dim      = the number of points
  ; fwhm     = full width half max of gaussian
  ; interval = sampling interval
  ;
  center=dim/2.0 ; automatically center gaussian in dim points
  x=findgen(dim)-center
  sigma=fwhm/sqrt(8.0 * alog(2.0))  ; fwhm is in data points
  coeff=1.0 / ( sqrt(2.0*!Pi) * (sigma/interval) )
  data=coeff * exp( -(interval * x)^2.0 / (2.0*sigma^2.0) )
  return,data
  end

Discrete Fourier Transform (DFT)

Because a digital computer works only with discrete data, numerical computation of the Fourier transform of f(t) requires discrete sample values of f(t), which we will call f_k. In addition, a computer can compute the transform F(s) only at discrete values of s; that is, it can only provide discrete samples of the transform, F_r. If f(kT) and F(r s_0) are the kth and rth samples of f(t) and F(s), respectively, and N_0 is the number of samples in the signal in one period T_0, then

f_k = T\, f(kT) = \frac{T_0}{N_0}\, f(kT)

and

F_r = F(r s_0)

where

s_0 = 2\pi F_0 = \frac{2\pi}{T_0}.

The discrete Fourier transform (DFT) is defined as

F_r = \sum_{k=0}^{N_0 - 1} f_k\, e^{-i r \Omega_0 k}

where \Omega_0 = 2\pi / N_0. Its inverse is

f_k = \frac{1}{N_0} \sum_{r=0}^{N_0 - 1} F_r\, e^{i r \Omega_0 k}.

These equations can be used to compute transforms and inverse transforms of appropriately-sampled data. Proofs of these relationships are in Lathi (546-548).
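
A direct O(N_0^2) evaluation of the DFT sum makes the definition concrete. The sketch below (in IDL) implements it and compares the result against the built-in fft; the factor of 256 accounts for the assumed 1/N_0 normalization that IDL applies to the forward transform:

  function dft, f
  ; direct evaluation of the DFT sum above; O(N^2), for illustration only
  n = n_elements(f)
  fr = dcomplexarr(n)
  for r = 0, n-1 do begin
    for k = 0, n-1 do fr[r] = fr[r] + f[k] * exp(dcomplex(0.0d, -2.0d * !DPi * r * k / n))
  endfor
  return, fr
  end

  c = exp( -0.05 * (findgen(256) - 128.0)^2 )
  print, max(abs( dft(c) - 256 * fft(c, -1) ))   ; agrees to within single-precision rounding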


Fast Fourier Transform (FFT)

The Fast Fourier Transform (FFT) is a DFT algorithm developed by Cooley and Tukey in 1965 which reduces the number of computations from on the order of N_0^2 to N_0 log N_0. There are basically two types of Cooley-Tukey FFT algorithms in use: decimation-in-time and decimation-in-frequency. The algorithm is simplified if N_0 is chosen to be a power of 2, but it is not a requirement.


Summary

The Fourier transform, an invaluable tool in science and engineering, has been introduced and defined. Its symmetry and computational properties have been described and the significance of the time or signal space (or domain) vs. the frequency or spectral domain has been mentioned. In addition, important concepts in sampling required for the understanding of the sampling theorem and the problem of aliasing have been discussed. An example of aliasing was provided along with a short description of the discrete Fourier transform (DFT) and its popular offspring, the Fast Fourier Transform (FFT) algorithm.


References

Blass, William E. and Halsey, George W., 1981, Deconvolution of Absorption Spectra, New York: Academic Press, 158 pp.

Bracewell, Ron N., 1965, The Fourier Transform and Its Applications, New York: McGraw-Hill Book Company, 381 pp.

Brault, J. W. and White, O. R., 1971, The analysis and restoration of astronomical data via the fast Fourier transform, Astron. & Astrophys., 13, pp. 169-189.

Brigham, E. Oren, 1988, The Fast Fourier Transform and Its Applications, Englewood Cliffs, NJ: Prentice-Hall, Inc., 448 pp.

Cooley, J. W. and Tukey, J. W., 1965, An algorithm for the machine calculation of complex Fourier series, Mathematics of Computation, 19, 90, pp. 297-301.

Gabel, Robert A. and Roberts, Richard A., 1973, Signals and Linear Systems, New York: John Wiley & Sons, 415 pp.

Gaskill, Jack D., 1978, Linear Systems, Fourier Transforms, and Optics, New York: John Wiley & Sons, 554 pp.

Lathi, B. P., 1992, Linear Systems and Signals, Carmichael, Calif: Berkeley-Cambridge Press, 656 pp.


Physics 641 - Instrument Design and Signal Enhancement / Forrest Hoffman