Scientific Computation

Scientific Computation is often described as the use of computational techniques to solve complex problems. That is a very broad definition, and only after you see some examples of SciComp (as we like to call it) will you get a better picture of what we're dealing with.

In this article, we will go through a series of examples of scientific computing and provide links to expand your knowledge and learn more. Two vital concepts, Machine Precision and the Nyquist Sampling Theorem, are usually unknown to people starting out in scientific computation and are major sources of error.

You can also read about Monte Carlo Simulations.

After the basics, you can go over our Numpy and MATLAB guide to get started!

To learn how to scale up your computations, go to our High Performance and Distributed Computing guide!

This YouTube channel makes videos to teach math to programmers, but as it turns out, these videos can help anyone learn programming as well, and they are particularly helpful for SciComp.


Examples

Lorenz Maps

An interesting exercise in scientific computation is drawing Lorenz maps on your computer; the linked article demonstrates this with a beautiful piece of code.
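
The linked article's code isn't reproduced here, but a minimal sketch of my own, integrating the Lorenz system with the same odeint approach used in the two-body example below and assuming the classic parameter values sigma = 10, rho = 28, beta = 8/3, would look like this:

from scipy.integrate import odeint
import matplotlib.pyplot as plt
import numpy as np

def lorenz(state, t, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # The classic Lorenz system of three coupled ODEs
    x, y, z = state
    return [sigma*(y - x), x*(rho - z) - y, x*y - beta*z]

t = np.arange(0.0, 40.0, 0.01)
state = odeint(lorenz, [1.0, 1.0, 1.0], t)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot(state[:, 0], state[:, 1], state[:, 2])
plt.show()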

The Three Body Problem

Two Body Problem

I think it is a better idea to demonstrate the two-body problem and leave the three-body problem as an exercise for the reader. Sorry for the clickbait title. In units where $G = 1$ (as in the code below), the governing equations are

$$\ddot{\mathbf{x}}_1 = \frac{m_2\,(\mathbf{x}_2 - \mathbf{x}_1)}{|\mathbf{x}_2 - \mathbf{x}_1|^3}, \qquad \ddot{\mathbf{x}}_2 = \frac{m_1\,(\mathbf{x}_1 - \mathbf{x}_2)}{|\mathbf{x}_1 - \mathbf{x}_2|^3}.$$

Figure 1: Plot of two bodies revolving around each other

First, import everything necessary:

from scipy.integrate import odeint
import matplotlib.pyplot as plt
import numpy as np

and now we define the set of first-order differential equations that governs the system:

m1 = 1
m2 = 1

def Twobody(state, t):
    # Unpack positions and velocities of the two bodies
    x1 = np.array(state[0:3])
    v1 = np.array(state[3:6])
    x2 = np.array(state[6:9])
    v2 = np.array(state[9:12])

    # Distance between the bodies
    r = x2 - x1
    r = np.sqrt(np.sum(r*r))

    # Newtonian gravity in units where G = 1
    x1d = v1
    v1d = m2*(x2 - x1)/(r*r*r)
    x2d = v2
    v2d = m1*(x1 - x2)/(r*r*r)

    return np.concatenate([x1d, v1d, x2d, v2d])

# Body 1 at rest at the origin; body 2 at (1, 0, 0) with unit velocity along y
state0 = [0.0, 0.0, 0.0,  0.0, 0.0, 0.0,  1.0, 0.0, 0.0,  0.0, 1.0, 0.0]

and now we integrate:

t = np.arange(0.0, 100.0, 0.01)

state = odeint(Twobody, state0, t)

and plot:

from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection on older Matplotlib
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection='3d') no longer works in recent Matplotlib
ax.plot(state[:, 6] - state[:, 0], state[:, 7] - state[:, 1], state[:, 8] - state[:, 2])
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()

Running this code will give you Figure 1.
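
As a quick sanity check on the integration (a small addition of mine, not part of the original exercise), you can verify that the total energy, which this system conserves, stays essentially constant over the run:

# Total energy in units where G = 1; any drift is integration error
x1, v1 = state[:, 0:3], state[:, 3:6]
x2, v2 = state[:, 6:9], state[:, 9:12]
r = np.linalg.norm(x2 - x1, axis=1)
kinetic = 0.5*m1*np.sum(v1**2, axis=1) + 0.5*m2*np.sum(v2**2, axis=1)
energy = kinetic - m1*m2/r
print(energy.max() - energy.min())  # should be tiny compared to |energy|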

Relaxation

Relaxation methods solve problems iteratively. For Laplace's equation $\nabla^2 \phi = 0$ on a grid, the Jacobi iteration repeatedly replaces each interior point with the average of its four neighbours,

$$\phi^{\text{new}}_{i,j} = \tfrac{1}{4}\left(\phi^{\text{old}}_{i-1,j} + \phi^{\text{old}}_{i+1,j} + \phi^{\text{old}}_{i,j-1} + \phi^{\text{old}}_{i,j+1}\right),$$

until the solution stops changing. Following is a Fortran snippet that does exactly this:

PROGRAM LAPLACE
    INTEGER, PARAMETER :: XSIZE=101, YSIZE=101
    REAL(8), DIMENSION(1:XSIZE,1:YSIZE) :: PHIOLD, PHINEW, ERROR
    REAL(8) :: ERRORSUM, NSQ = XSIZE*YSIZE

    ! Stop when the RMS change per sweep reaches machine precision
    REAL(8), PARAMETER :: CONVERGENCE = EPSILON(ERRORSUM)
    INTEGER :: ITERATIONS = 0

    INTEGER :: I, J

    ! Fix the boundary values of the potential
    DO I = 1, XSIZE
        DO J = 1, YSIZE
            PHINEW(I,J) = (I*I - J*J)/NSQ
        END DO
    END DO

    ! Zero initial guess in the interior; only the boundary values remain
    PHINEW(2:XSIZE-1, 2:YSIZE-1) = 0

    OPEN(10, FILE='jacobi_101.csv')

    JACOBI_ITERATION: DO
        ITERATIONS = ITERATIONS + 1
        PHIOLD = PHINEW

        ! Replace each interior point with the average of its four neighbours
        PHINEW(2:XSIZE-1, 2:YSIZE-1) = 0.25*( &
            &PHIOLD(1:XSIZE-2, 2:YSIZE-1) + &
            &PHIOLD(3:XSIZE, 2:YSIZE-1) + &
            &PHIOLD(2:XSIZE-1, 1:YSIZE-2) + &
            &PHIOLD(2:XSIZE-1, 3:YSIZE))
        ERROR = PHIOLD - PHINEW
        ERRORSUM = SQRT(SUM(ERROR*ERROR)/NSQ)
        WRITE(10,*) ERRORSUM, ITERATIONS
        IF (ERRORSUM < 2*CONVERGENCE) EXIT JACOBI_ITERATION

    END DO JACOBI_ITERATION

    CLOSE(10)
END PROGRAM LAPLACE
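
If Fortran is unfamiliar, here is a rough NumPy translation of the same Jacobi sweep (a sketch of my own, not from the original article); the slices play the role of the Fortran array sections:

import numpy as np

XSIZE, YSIZE = 101, 101
nsq = XSIZE * YSIZE

# Boundary (and initial) values, matching the Fortran setup
i, j = np.meshgrid(np.arange(1, XSIZE + 1), np.arange(1, YSIZE + 1), indexing='ij')
phi = (i*i - j*j) / nsq
phi[1:-1, 1:-1] = 0.0

convergence = np.finfo(float).eps
iterations = 0
while True:
    iterations += 1
    old = phi.copy()
    # Average of the four neighbours, vectorised over the interior
    phi[1:-1, 1:-1] = 0.25 * (old[:-2, 1:-1] + old[2:, 1:-1]
                              + old[1:-1, :-2] + old[1:-1, 2:])
    errorsum = np.sqrt(np.sum((old - phi)**2) / nsq)
    if errorsum < 2 * convergence:
        break

print(iterations)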

Machine Precision

Computers use bits to store information. A bit can be physically realised by any device that acts as a switch: it is either ON (1) or OFF (0), the two states of the bit. In your computer, bits are manifested by transistors, lots of them packed into a microprocessor. There are different ways to represent numbers using these bits. Why do you need to represent numbers? Because you want to do calculations with them.

Assuming you're familiar with the binary number system, you can represent any natural number using 0s and 1s, but what are you going to do about rational numbers? You might need an indefinite number of bits to represent them, but you can write the numerator and denominator separately, solving this problem. But then what about real numbers? There is absolutely no way you can represent an arbitrary real number with a finite number of bits. Hence, you approximate: you use a fixed set of bits with a well-defined protocol to represent your numbers approximately. Under this framework, you have to accept that you cannot represent all real numbers. Your number line has gaps. The width of the gap around 1 is called the machine precision, or the epsilon of the machine.

Real numbers are therefore approximated as floating point numbers, and most of the machines that you'll encounter in your everyday life represent floating point numbers using the IEEE 754 standard. To learn more about them, read the article What Every Computer Scientist Should Know About Floating-Point Arithmetic.
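
A quick way to see these gaps in action (a small demonstration of my own, not from the original article): 0.1 has no exact binary representation, so familiar decimal identities fail, and you can inspect the raw IEEE 754 bit pattern of a double directly:

import struct

# 0.1 is not exactly representable in binary, so the identity fails
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2 - 0.3)   # ~5.55e-17, on the order of machine epsilon

# The IEEE 754 double-precision bit pattern of 1.0
print(struct.pack('>d', 1.0).hex())  # 3ff0000000000000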

Here we demonstrate a way to find the epsilon of the float type in Python:

eps = 1.0
while 1 + eps > 1:
    eps = eps/2

print(eps)

This code returns

1.1102230246251565e-16

which is our answer. When eps is as small as this number, adding it to 1 gives back exactly 1. Thus, eps has fallen below the least count of the arithmetic our machine can support within its floating point representation.
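
You can cross-check this against the value the libraries themselves report (a quick verification of my own). By convention, machine epsilon is quoted as the gap between 1 and the next representable double, which is exactly twice the number our loop printed:

import sys
import numpy as np

print(sys.float_info.epsilon)  # 2.220446049250313e-16
print(np.finfo(float).eps)     # the same value
print(2**-52, 2**-53)          # conventional epsilon vs. our loop's answer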

In your scientific computation programs, arithmetic and comparisons will therefore always carry an error of the order of the machine precision, and in some cases the structure of your program can amplify that error by several orders of magnitude, so you often need to be extra careful.
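
A classic way this amplification happens is catastrophic cancellation, when two nearly equal numbers are subtracted (an illustrative example of mine, not from the original article):

import numpy as np

x = 1e-8
# cos(x) rounds to exactly 1.0, so the subtraction loses all information
naive = 1 - np.cos(x)        # 0.0
# An algebraically equivalent form avoids the cancellation
stable = 2 * np.sin(x/2)**2  # ~5e-17, the correct answer
print(naive, stable)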

Nyquist Sampling Theorem

It might not be a good idea to turn this into an electrical engineering class and explain the Nyquist-Shannon sampling theorem in full; you can google it and learn more before you proceed. I will just give a simple definition and a demonstration of how it can affect your code.

Simply put, the theorem states that your sampling frequency must be at least twice the highest frequency present in the signal you're trying to measure. Now imagine that, as part of your assignment, you have to plot $|A(t)|^2$ for the following function:

$$A(t) = \sum_{n=-5}^{5} e^{2\pi i n t / T}$$

Now, here $T$ is given by $T = 2d/c$, where $c$ is the speed of light and $d$ is the size of a resonating cavity, which is usually of the order of a few centimetres. Thus, a realistic value of $T$ is about $10^{-10}$ s. You write a program using that value of $T$, and you get a plot like Figure 1. But a realistic impression of the plot is shown in Figure 2. What is the problem here?

Figure 1 : http://imgur.com/a/05XlM
Figure 2 : http://imgur.com/a/33J3e

We represent our plot data in an array which holds discrete values of $t$. To follow the Nyquist sampling theorem, the difference between subsequent $t$ values should be about $5\times10^{-11}$ s, so plotting our function over any macroscopic time window would require a huge array and an unfeasible amount of computer memory. Rather, if you simply use $10^5$ samples from $0$ to $5T$ and remember to rescale your units later, you get the correct functional character of the plot. Rescaling, often done precisely to satisfy the Nyquist criterion, is hence an important tool in scientific computation.

The code is attached for reference:

import numpy as np
import matplotlib.pyplot as plt

d = 100
c = 3e8
#TF = 2.0*d/c
TF = 1e-10

# 1e5 samples over five periods satisfies the Nyquist criterion comfortably
t = np.linspace(0, 5*TF, 100000)

# Sum of 11 cavity modes with equal amplitudes and phases
ya = np.zeros_like(t, dtype=complex)
for n in range(-5, 6):
    ya = ya + np.exp(1j*n*2*np.pi*t/TF)

# Normalised intensity
ya = np.absolute(ya)
ya = ya*ya
ya = ya/ya.max()

plt.figure()
plt.xlabel('$t$')
plt.ylabel('$|A(t)|^2$')
plt.plot(t, ya)
plt.legend(['eq amp,ph'], loc=1)
plt.show()
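
To see what goes wrong without rescaling (a small experiment of my own, not from the original article), sample the same signal with a step close to, but not exactly equal to, the pulse period. The pulses then appear at a spurious, much lower frequency, which is exactly the aliasing the theorem warns about:

import numpy as np
import matplotlib.pyplot as plt

TF = 1e-10  # same pulse period as above

# Undersampling: dt is ~1.001e-10 s, close to the pulse period itself
# and far coarser than the Nyquist limit for this signal
t_bad = np.linspace(0, 1e-7, 1000)
yb = np.zeros_like(t_bad, dtype=complex)
for n in range(-5, 6):
    yb = yb + np.exp(1j*n*2*np.pi*t_bad/TF)
yb = np.absolute(yb)**2
yb = yb/yb.max()

plt.figure()
plt.plot(t_bad, yb)  # apparent slow pulses: an aliasing artefact
plt.show()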