One of my bright IITJEE Math students asked me this question just now: when can we interchange two integrals, or an infinite series and an integral, or two limits, or two infinite series?

Most of the time, students routinely do such things without questioning whether the operation is valid. It is almost the same as in matrices, where order matters: in general, $AB \neq BA$.

My student’s question hits the bull’s eye: what is analysis, and why do analysis? To answer these two questions, or to satisfy your curiosity, I am reproducing the answer from *Prof. Terence Tao’s text Analysis, Volume I*. It is just for the purpose of sharing with budding minds…

**What is analysis?**

Real Analysis deals with the analysis of real numbers, sequences and series of real numbers, and real-valued functions. It is related to, but distinct from, complex analysis, which concerns the analysis of complex numbers and complex-valued functions; harmonic analysis, which concerns the analysis of harmonics (waves) such as sine waves, and how they synthesize other functions via the Fourier transform; functional analysis, which focuses much more heavily on functions (and how they can form things like vector spaces); and so forth. Analysis is the rigorous study of such objects, with a focus on trying to pin down precisely and accurately the qualitative and quantitative behaviour of these objects. Real analysis is the theoretical foundation which underlies calculus, which is the collection of computational algorithms one uses to manipulate functions.

In Real Analysis, we study many objects which will be familiar to you from freshman calculus: numbers, sequences, series, limits, functions, definite integrals, derivatives and so forth. You already have a great deal of experience of computing with these objects; however, in Real Analysis we focus more on the underlying theory for these objects. In Real Analysis, we are concerned with questions such as the following:

1) What is a real number? Is there a largest real number? After 0, what is the “next” real number (i.e., what is the smallest positive real number)? Can you cut a real number into pieces infinitely many times? Why does a number such as 2 have a square root, but a number such as $-2$ does not? If there are infinitely many reals and infinitely many rationals, how come there are “more” real numbers than rational numbers?

2) How do you take the limit of a sequence of real numbers? Which sequences have limits and which ones don’t? If you can stop a sequence from escaping to infinity, does this mean that it must eventually settle down and converge? Can you add infinitely many real numbers together and still get a finite real number? Can you add infinitely many rational numbers together and end up with a non-rational number? If you rearrange the elements of an infinite sum, is the sum still the same?

3) What is a function? What does it mean for a function to be continuous? Differentiable? Integrable? Bounded? Can you add infinitely many functions together? What about taking limits of sequences of functions? Can you differentiate an infinite series of functions? What about integrating? If a function $f$ takes the value 3 when $x = 0$ and 5 when $x = 1$ (that is, $f(0) = 3$ and $f(1) = 5$), does it have to take every intermediate value between 3 and 5 when $x$ goes between 0 and 1? Why?

You may already know answers to some of these questions from your calculus classes, but most likely these sorts of issues were only of secondary importance in those courses; the emphasis was on getting you to perform computations, such as computing the integral of $x \sin(x^2)$ from $x = 0$ to $x = 1$. But now that you are comfortable with these objects and already know how to do the computations, real analysis investigates what is really going on.

**Why do analysis?**

It is a fair question to ask, “why bother?”, when it comes to analysis. There is a certain philosophical satisfaction in knowing *why* things work, but a pragmatic person may argue that one only needs to know *how* things work to solve real-life problems. The calculus training you receive in introductory classes is certainly adequate for you to begin solving many problems in physics, chemistry, biology, economics, computer science, finance, engineering, or whatever else you end up doing — and you can certainly use things like the chain rule, L’Hopital’s Rule, or integration by parts without knowing why these rules work, or whether there are any exceptions to these rules. However, one can get into trouble if one applies rules without knowing where they came from and what the limits of their applicability are. Below are some examples in which several of these familiar rules, if applied blindly without knowledge of the underlying analysis, can lead to disaster.

**Example 1. (Division by zero).** This is a very familiar one to you: the cancellation law $ac = bc \implies a = b$ does not work when $c = 0$. For instance, the identity $1 \times 0 = 2 \times 0$ is true, but if one blindly cancels the 0 then one obtains $1 = 2$, which is false. In this case, it was obvious that one was dividing by zero; but in other cases it can be more hidden. (For example, refer to my blog article

https://mathhothouse.wordpress.com/2014/07/22/math-basics-division-by-zero-3/)

**Example 2 (Divergent Series).** You have probably seen geometric series such as the infinite sum
$$S = 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \ldots$$
You have probably seen the following trick to sum the series: if we call the above sum $S$, then if we multiply both sides by 2, we obtain
$$2S = 2 + 1 + \frac{1}{2} + \frac{1}{4} + \ldots = 2 + S$$
and hence $S = 2$, so the series sums to 2. However, if you apply the same trick to the series
$$S = 1 + 2 + 4 + 8 + 16 + \ldots$$
one gets nonsensical results:
$$2S = 2 + 4 + 8 + 16 + \ldots = S - 1, \quad \text{and hence } S = -1.$$
So the same reasoning that shows that $1 + \frac{1}{2} + \frac{1}{4} + \ldots = 2$ also gives that $1 + 2 + 4 + 8 + \ldots = -1$. Why is it that we trust the first equation but not the second? A similar example arises with the series
$$S = 1 - 1 + 1 - 1 + 1 - 1 + \ldots$$
We can write
$$S = (1 - 1) + (1 - 1) + (1 - 1) + \ldots = 0 + 0 + 0 + \ldots$$
and hence that $S = 0$; or instead we can write
$$S = 1 + (-1 + 1) + (-1 + 1) + \ldots = 1 + 0 + 0 + \ldots$$
and hence that $S = 1$; or instead we can write
$$S = 1 - (1 - 1 + 1 - 1 + \ldots) = 1 - S$$
and hence that $S = \frac{1}{2}$. Which one is correct?

**Example 3. (Divergent Sequence)** Here is a slight variation of the previous example. Let $x$ be a real number, and let $L$ be the limit
$$L = \lim_{n \to \infty} x^n.$$
Changing the variable $n = m + 1$, we have
$$L = \lim_{m+1 \to \infty} x^{m+1} = \lim_{m+1 \to \infty} x \cdot x^m,$$
which equals $x \lim_{m+1 \to \infty} x^m$.
But if $m + 1 \to \infty$, then $m \to \infty$, thus
$$\lim_{m+1 \to \infty} x^m = \lim_{m \to \infty} x^m = L,$$
and thus
$$xL = L.$$
At this point, we could cancel the $L$'s and conclude that $x = 1$ for an arbitrary real number $x$, which is absurd. But since we are already aware of the division by zero problem, we could be a little smarter and conclude instead that either $x = 1$, or $L = 0$. In particular, we seem to have shown that
$$\lim_{n \to \infty} x^n = 0 \quad \text{for all } x \neq 1.$$
But this conclusion is absurd if we apply it to certain values of $x$; for instance, by specializing to the case $x = 2$ we could conclude that the sequence $1, 2, 4, 8, \ldots$ converges to zero, and by specializing to the case $x = -1$ we conclude that the sequence $1, -1, 1, -1, \ldots$ also converges to zero. These conclusions appear to be absurd; what is the problem with the above argument?
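To see concretely why $\lim_{n \to \infty} x^n = 0$ fails for these values of $x$, here is a short Python check (my own illustration): for $x = 2$ the terms grow without bound, for $x = -1$ they oscillate, and only for $|x| < 1$ do they actually approach zero.

```python
def powers(x, n):
    """First n terms of the sequence x^0, x^1, x^2, ..."""
    return [x ** k for k in range(n)]

grow = powers(2, 8)      # [1, 2, 4, 8, 16, 32, 64, 128]: no limit at all
osc = powers(-1, 8)      # [1, -1, 1, -1, ...]: bounded, but never settles
decay = powers(0.5, 8)   # halves each step: this one really does tend to 0

print(grow, osc, decay)
```

The fallacious argument derived $xL = L$, which is meaningless when the limit $L$ does not exist; the hidden error is assuming the existence of $L$ in the first place.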

**Example 4. (Limiting values of functions).** Start with the expression
$$\lim_{x \to \infty} \sin x,$$
make the change of variable $x = y + \pi$ and recall that $\sin(y + \pi) = -\sin y$ to obtain
$$\lim_{x \to \infty} \sin x = \lim_{y + \pi \to \infty} \sin(y + \pi) = \lim_{y \to \infty} (-\sin y),$$
which equals
$$-\lim_{y \to \infty} \sin y.$$
Since $\lim_{x \to \infty} \sin x = \lim_{y \to \infty} \sin y$, we thus have
$$\lim_{x \to \infty} \sin x = -\lim_{x \to \infty} \sin x$$
and hence
$$\lim_{x \to \infty} \sin x = 0.$$
If we then make the change of variables $x = \pi/2 + z$ and recall that $\sin(\pi/2 + z) = \cos z$, we conclude that $\lim_{x \to \infty} \cos x = 0$.
Squaring both sides of these limits and adding, we see that
$$\lim_{x \to \infty} (\sin^2 x + \cos^2 x) = 0^2 + 0^2 = 0.$$
On the other hand, we have $\sin^2 x + \cos^2 x = 1$ for all $x$. Thus we have shown that $1 = 0$! What is the difficulty here?
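The root of the trouble is that $\lim_{x \to \infty} \sin x$ does not exist in the first place, so every manipulation above operated on an undefined object. A tiny Python check (my addition) exhibits two sequences of arguments, marching off to infinity, along which $\sin$ stays at $+1$ and $-1$ respectively:

```python
import math

# Along x = pi/2 + 2*pi*n, sin(x) is always 1; along x = 3*pi/2 + 2*pi*n it
# is always -1.  A function with two distinct subsequential limits at
# infinity has no limit there at all.
highs = [math.sin(math.pi / 2 + 2 * math.pi * n) for n in (10, 100, 1000)]
lows = [math.sin(3 * math.pi / 2 + 2 * math.pi * n) for n in (10, 100, 1000)]

print(highs)   # all approximately  1.0
print(lows)    # all approximately -1.0
```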

**Example 5. (Interchanging integrals).** The interchanging of integrals is a trick which occurs in mathematics just as commonly as the interchanging of sums. Suppose one wants to compute the volume under a surface $z = f(x, y)$ (let us ignore the limits of integration for the moment). One can do it by slicing parallel to the $x$-axis: for each fixed value of $y$, we can compute an area $\int f(x, y)\, dx$, and then we integrate the area in the $y$ variable to obtain the volume
$$V = \int \int f(x, y)\, dx\, dy.$$
Or we could slice parallel to the $y$-axis for each fixed $x$ and compute an area $\int f(x, y)\, dy$, and then integrate in the $x$ variable to obtain
$$V = \int \int f(x, y)\, dy\, dx.$$
This seems to suggest that one should always be able to swap integral signs:
$$\int \int f(x, y)\, dx\, dy = \int \int f(x, y)\, dy\, dx.$$
And, indeed, people swap integral signs all the time, because sometimes one variable is easier to integrate in first than the other. However, just as infinite sums sometimes cannot be swapped, integrals are also sometimes dangerous to swap. An example is with the integrand $e^{-xy} - xye^{-xy}$. Suppose we believe that we can swap the integrals:
$$\int_0^\infty \int_0^1 (e^{-xy} - xye^{-xy})\, dy\, dx = \int_0^1 \int_0^\infty (e^{-xy} - xye^{-xy})\, dx\, dy.$$
Since
$$\int_0^1 (e^{-xy} - xye^{-xy})\, dy = y e^{-xy} \Big|_{y=0}^{y=1} = e^{-x},$$
the left-hand side is $\int_0^\infty e^{-x}\, dx = 1$. But since
$$\int_0^\infty (e^{-xy} - xye^{-xy})\, dx = x e^{-xy} \Big|_{x=0}^{x=\infty} = 0,$$
the right-hand side is $\int_0^1 0\, dy = 0$. Clearly $1 \neq 0$, so there is an error somewhere; but you won’t find one anywhere except in the step where we interchanged the integrals. So, how do we know when to trust the interchange of integrals?
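The failure can be located numerically. Truncate the improper $x$-integral at a cutoff $X$: for each fixed $y > 0$ the inner integral equals $X e^{-Xy}$, which tends to 0 as $X \to \infty$, and yet its integral over $y \in [0, 1]$ equals $1 - e^{-X}$, which tends to 1. The following Python sketch (my own, using a crude midpoint rule rather than any particular library) illustrates this:

```python
import math

def midpoint(f, a, b, n=2000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

X = 50.0  # cutoff standing in for the upper limit x = infinity

def inner(y):
    """integral_0^X (e^{-xy} - x*y*e^{-xy}) dx, which equals X*e^{-X*y}."""
    return midpoint(lambda x: math.exp(-x * y) - x * y * math.exp(-x * y), 0.0, X)

tail = inner(0.5)                          # essentially 0 for each fixed y > 0
outer = midpoint(inner, 0.0, 1.0, n=200)   # yet the y-integral stays near 1

print(tail, outer)
```

The "integrate in $x$ first" order sees values collapsing to 0 pointwise, while the "integrate in $y$ first" order keeps total mass 1; the mass escapes toward $x = \infty$, which is exactly where the interchange breaks.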

**Example 6. (Interchanging limits).** Suppose we start with the plausible looking statement
$$\lim_{x \to 0} \lim_{y \to 0} \frac{x^2}{x^2 + y^2} = \lim_{y \to 0} \lim_{x \to 0} \frac{x^2}{x^2 + y^2}.$$
But we have
$$\lim_{y \to 0} \frac{x^2}{x^2 + y^2} = \frac{x^2}{x^2 + 0^2} = 1 \quad \text{(for } x \neq 0\text{)},$$
so the LHS is 1; on the other hand, we have
$$\lim_{x \to 0} \frac{x^2}{x^2 + y^2} = \frac{0^2}{0^2 + y^2} = 0 \quad \text{(for } y \neq 0\text{)},$$
so the RHS is 0. Since 1 is clearly not equal to 0, this suggests that the interchange of limits is untrustworthy. But are there any other circumstances in which the interchange of limits is legitimate?
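Numerically the order-dependence is easy to see (a quick Python check of my own): shrink $y$ much faster than $x$ and the values of $\frac{x^2}{x^2 + y^2}$ hug 1; shrink $x$ much faster than $y$ and they hug 0.

```python
def f(x, y):
    """The function x^2 / (x^2 + y^2) from Example 6 (undefined at the origin)."""
    return x * x / (x * x + y * y)

near_one = f(1e-3, 1e-9)    # y sent toward 0 first: value is almost exactly 1
near_zero = f(1e-9, 1e-3)   # x sent toward 0 first: value is almost exactly 0

print(near_one, near_zero)
```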

**Example 7. (Interchanging limits, again).** Consider the plausible looking statement
$$\lim_{x \to 1^-} \lim_{n \to \infty} x^n = \lim_{n \to \infty} \lim_{x \to 1^-} x^n,$$
where the notation $x \to 1^-$ means that $x$ is approaching 1 from the left. When $x$ is to the left of 1, then $\lim_{n \to \infty} x^n = 0$, and hence the left-hand side is zero. But we also have $\lim_{x \to 1^-} x^n = 1$ for all $n$, and so the right-hand side limit is 1. Does this demonstrate that this type of limit interchange is always untrustworthy?
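A two-line Python check (my own) shows the same two-sided behaviour: with $x$ held fixed just below 1, raising $n$ drives $x^n$ to 0, while with $n$ held fixed, pushing $x$ toward 1 drives $x^n$ to 1.

```python
# n -> infinity first, with x = 0.99 held fixed just left of 1:
inner_first = 0.99 ** 10_000    # about 2e-44, i.e. effectively 0

# x -> 1^- first, with n = 100 held fixed:
outer_first = 0.999999 ** 100   # about 0.9999, i.e. effectively 1

print(inner_first, outer_first)
```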

**More examples, later…**

**Nalin Pithwa**