Together they relate the concepts of derivative and integral to one another, uniting these concepts under the heading of *calculus*, and they connect the antiderivative to the concept of area under a curve.

**FTOC-1** says that the process of calculating a definite integral to find the area under a curve, say between **x = a** and **x = b**, can be reduced to finding the difference of the antiderivative of the integrand evaluated at those two limits: **F(b) - F(a)**.

**FTOC-2** is a little more abstract, but very important. It turns the definite integral into a function, one with the independent variable as the upper limit of integration, that *accumulates area* under a curve. This concept is important because it allows us to create a whole new class of useful functions that are only *defined* by an integral: **integral-defined functions**. One such example is the Gaussian distribution function used in statistics and probability, but there are many others. Most importantly, **FTOC-2** establishes that differentiation and integration are inverse procedures.
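To make the idea of an integral-defined function concrete, here is a minimal numerical sketch in Python. The integrand **e^(-t²)** (a Gaussian-type curve) and the midpoint-rule approximation are my choices for illustration, not anything prescribed by the theorem:

```python
import math

def integral_defined(f, a, x, n=1000):
    """Approximate G(x) = integral of f from a to x with a midpoint Riemann sum."""
    dt = (x - a) / n
    return sum(f(a + (i + 0.5) * dt) for i in range(n)) * dt

# A Gaussian-type integrand; G is then a function of its upper limit x.
f = lambda t: math.exp(-t * t)
G = lambda x: integral_defined(f, 0.0, x)

# G accumulates area: it grows as x grows.
print(G(1.0), G(2.0))
```

Even though this integrand has no elementary antiderivative, **G(x)** is a perfectly good function: give it an **x**, and it returns the area accumulated up to **x**.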

We'll start with **FTOC-1** and in this section we'll use capital letters for functions that are antiderivatives of their lower-case counterparts. So from here on you can assume that **F(x)** is the antiderivative of **f(x)**, **G(x)** is the antiderivative of **g(x)**, and so on. Here's the statement of **FTOC-1**:

*Note that some sources swap the numbering of FTOC-1 and 2 from what I use here. It doesn't matter ... it's the concepts that are important.*

We begin by converting the difference **F(b) - F(a)** into a sum of smaller differences. The figure below shows graphically how this is done. If we plot **F(x)**, we can divide it into segments with endpoints **x_0 = a, x_1, x_2,** ... and so on. I've only gone up to **x_5 = b** here, but we could use any number of segments.

If we calculate the widths of the segments along the y-axis, we find widths of **F(b) - F(x_4)**, **F(x_4) - F(x_3)**, and so on, down to **F(x_1) - F(a)**. Adding them all recovers the whole difference:

where the summation is **[F(x_1) - F(x_0)] + [F(x_2) - F(x_1)] + ... + [F(x_5) - F(x_4)]**, a telescoping sum in which every interior term cancels. Now we can in fact make any number of these partitions, so let's just make this small change to reflect that:

So far we have restated the right side of the **FTOC-1**, **F(b) - F(a)**, as a sum of smaller divisions of the antiderivative function.
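The cancellation in that telescoping sum is easy to verify numerically. A small sketch, where the choice of **F(x) = x³/3** and the interval are just examples:

```python
# Telescoping sum: interior terms cancel, leaving F(b) - F(a).
F = lambda x: x**3 / 3                 # example antiderivative (of f(x) = x^2)
a, b, n = 1.0, 4.0, 5
xs = [a + i * (b - a) / n for i in range(n + 1)]   # x_0 = a, ..., x_n = b

telescoped = sum(F(xs[i]) - F(xs[i - 1]) for i in range(1, n + 1))
print(telescoped, F(b) - F(a))         # the two values agree
```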

The next step is to recall the mean value theorem, which says that for every function that is continuous on an interval **[a, b]** and differentiable on **(a, b)**, there exists a number, **c**, at which the derivative (slope) of the function, **f'(c)**, is equal to the average slope between **a** and **b**:

Remember that we really don't care where c is, just that it exists in the interval of interest. We'll rearrange that to

Now the mean value theorem guarantees the *existence* of the point **c** on any interval, including **[x_{i-1}, x_i]**, so we can rewrite the MVT like this: there must exist some **c_i** in each interval **[x_{i-1}, x_i]** such that **F(x_i) - F(x_{i-1}) = F'(c_i)(x_i - x_{i-1})**.

Here that is again,

and if we remember that, because **F(x)** is an antiderivative of **f(x)**, we have **F'(x) = f(x)**, we get

Now if we replace **x_i - x_{i-1}** with **Δx**, the width of each partition, the sum becomes:

In the first part of the proof, we showed that the sum on the left is just **F(b) - F(a)**, so we have

Finally, what's on the right is just a Riemann sum for the area under **f(x)**, where the MVT guarantees that there is some point **c_i** somewhere in each partition, and **Δx** is just the width of the partition. As the width of those partitions (rectangles) goes to zero (**Δx → dx**), the sum becomes the integral of the function:
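Here is a quick numerical check of that limit, using **f(x) = cos(x)** on **[0, π/2]** as an example; the midpoint of each partition stands in for the guaranteed point **c_i**:

```python
import math

f = lambda x: math.cos(x)        # example integrand
F = lambda x: math.sin(x)        # its antiderivative
a, b = 0.0, math.pi / 2

def riemann(n):
    """Riemann sum with n partitions, sampling each at its midpoint."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

for n in (10, 100, 1000):
    print(n, riemann(n))         # approaches F(b) - F(a) = 1
```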

*Quod erat demonstrandum*

**FTOC-1**

The area bounded by a smooth curve **f(x)**, the x-axis, and the lines **x=a** and **x=b** is just the difference between the antiderivative of **f(x)** evaluated at points **a** and **b**.

It's worth thinking about the first fundamental theorem of calculus one more time. It says that the integral representing the area between a function and the x-axis, an infinite sum of infinitely narrow rectangles, can be reduced to a simple difference of an antiderivative taken at the endpoints of the domain of integration, **[a, b]**.

The last line of the box expresses three layers of functions and operations. Inside is the function **f(t)**, which assigns one y-coordinate to every value of **t**. That is inside of an integral function, with independent variable **x**. That integral function calculates an area below **f(t)** between limits **a** and **x**. Finally, all of that is inside of a derivative.

Another way to look at it is that we've invented a new kind of function, **G(x)**, an **integral-defined function** with its independent variable as one of the limits. It's an *area-accumulation function*: As **x** grows, the amount of area under the curve increases.

Use the slider on the graph below to get a feel for how the definite integral works as an **area accumulation function**. The independent variable is the upper limit of the definite integral, so the integral *itself* is a function of the independent variable **x**. The slider changes the value of **x**. Notice also that the area under the curve accumulates, then diminishes as the curve dips below the x-axis (negative area).
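The same accumulate-then-diminish behavior can be seen numerically. A sketch using **f(t) = cos(t)**, a curve of my choosing, and a trapezoid rule:

```python
import math

f = lambda t: math.cos(t)

def G(x, n=2000):
    """Area accumulated under f from 0 to x (trapezoid rule)."""
    dt = x / n
    return sum((f(i * dt) + f((i + 1) * dt)) / 2 * dt for i in range(n))

# Area accumulates while f > 0, then diminishes where f dips below the axis.
print(G(math.pi / 2))   # all positive area so far
print(G(math.pi))       # the negative area has cancelled it
```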

Now if we focus on the area between **x** and **x + h**, we can express that area two different ways:

The area is approximately equal to **f(x) · h**, the area of a thin rectangle of height **f(x)** and width **h**. But it is also exactly the difference in accumulated area, so we have **G(x + h) - G(x) ≈ f(x) · h**.

Now dividing by **h** gives us an expression on the left that looks like the derivative:

If we take the limit as **h →0**, we see that the derivative of the area function is just **f(x)**:
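That limit can be checked numerically: the difference quotient of the accumulated-area function closes in on **f(x)** as **h** shrinks. A sketch with **f(t) = t²**, an arbitrary example:

```python
f = lambda t: t * t

def G(x, n=4000):
    """G(x) = area under f from 0 to x, by a midpoint sum."""
    dt = x / n
    return sum(f((i + 0.5) * dt) for i in range(n)) * dt

# [G(x + h) - G(x)] / h approaches f(x) as h shrinks.
x = 1.5
for h in (0.1, 0.01, 0.001):
    print(h, (G(x + h) - G(x)) / h)   # approaches f(1.5) = 2.25
```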

If you imagine moving our vertical line along the independent variable **x**, sweeping out area under the curve, you can see that the total area would oscillate as we add negative and positive areas. It's not a stretch to see how the purple curve could be a graph of that area as a function of **x**. The purple graph is the integral-defined function. It's actually a pretty important function in the field of optics, and it's called the **Fresnel** (pronounced fruh · nel') function.

If we integrate (note that the lower limit is zero), then take the derivative of the result, after evaluating the limits, we get:

The graph is shown below, and the full integral is worked out. When the limits are evaluated, the value of the integral is **x^2 - a^2**.

Now we take the derivative with respect to **x**, and the derivative of the lower-limit term, just the constant **a^2**, is zero. The lower limit doesn't matter.

So the **FTOC-2** is pretty weird. Notice that when taking the derivative of such an integral, we don't actually *need* to integrate. We just replace **t** in the integrand by **x**, and that's it. Couldn't be simpler.

The **FTOC-2** posits that:

So we need to prove that **G'(x)**, as defined, is equal to **f(x)**. To do so, we write the integral-defined function at two points, **G(x)** and **G(z)**, according to **FTOC-2**:

Now we're going to work toward a merging of the average value of an integral with the definition of a derivative, so the next step is to take the difference between **G(z)** and **G(x)**, and we'll assume that **z > x**.

Now the average value of that integral is just the sum of all the **f(t)**'s over the interval, divided by the width of the interval, **(z - x)**. We'll name that average **f(c)** (with no particular meaning intended for the letter **c**):

Now here's the crux: There's another way to calculate that same average. It's just the rise of the antiderivative, the change in **G**, over the change in the independent variable:

This looks like a derivative; it's just lacking the limit as **z → x** to give **G'**. Recall that we're trying to show that **G'(x) = f(x)**. If we take that limit on both expressions for the average of the integral, we end up "squeezing" **f(c)** between **x** and **z**. After all, the average will always lie between the two extremes. In the limit where **z = x**, **f(c) = f(x)**, and we've proved our theorem.

With **FTOC-2** proved, we can use that result to prove **FTOC-1**, which says:

Now we've proved that **G(x)** is an antiderivative of **f(x)**,

so **F(x)**, postulated to be an antiderivative of **f(x)**, must be equal to **G(x)** to within an additive constant:

Then we can simply write

and we have proved the **FTOC-1**.

Let's first find the integral in the straightforward way, using the power rule of integration and evaluating the limits:

Now the derivative of the integral is:

which is just the integrand of our original integral, with **t** replaced by **x**. And that will be the case in all such problems. All together it looks like this:

Now one thing you might be wondering about is the lower limit of integration, **x=0**. Let's repeat this problem, except this time with a finite lower limit; let's call it **a**.

Do the integral in the same way, except now we get the answer above with a constant (**-a^3/3**) added to it:

Now if we take the derivative, it's the same because the second term is constant. What we find is that the lower limit just doesn't matter in this kind of expression of **FTOC-2**.
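A numerical sketch of that fact, differentiating **G(x) = ∫ t² dt** from **a** to **x** for several lower limits; the central-difference derivative and the test points are just illustrative choices:

```python
def F(x):
    return x**3 / 3                      # antiderivative of t^2, by the power rule

def G(x, a):
    """G(x) = integral of t^2 from a to x, with the limits already evaluated."""
    return F(x) - F(a)

def dGdx(x, a, h=1e-6):
    """Central-difference derivative of G with respect to x."""
    return (G(x + h, a) - G(x - h, a)) / (2 * h)

# Whatever the lower limit, the derivative is the same: f(x) = x^2.
for a in (0.0, 1.0, -3.0):
    print(a, dGdx(2.0, a))               # each is about 4.0 = 2^2
```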

Putting it all together, the statement is:

So you see, in these problems, there's no need to integrate at all. It only becomes more complicated when that **x** in the upper limit is a *function of x*, so that we'll need some kind of chain rule.

**For many FTOC-2 problems, the solution is deceptively obvious:**

Now instead of just having an independent variable as the upper limit of integration, we have a function of that variable — it's like a chain rule problem in differentiation. Think of it like this: If

then

Now using the chain rule of differentiation, the derivative of **F(x^2)** is the derivative of the outer function evaluated at the inner function, multiplied by the derivative of the inner function: **F'(x^2) · 2x = f(x^2) · 2x**.

Now we can just plug in the solution:
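A numerical check of that chain-rule result, using **f(t) = cos(t)** as an example integrand so that **H(x)** integrates **cos(t)** from **0** to **x²** and the claim is **H'(x) = cos(x²) · 2x**:

```python
import math

def G(u, n=2000):
    """Integral of cos(t) from 0 to u, by a midpoint sum."""
    dt = u / n
    return sum(math.cos((i + 0.5) * dt) for i in range(n)) * dt

def H(x):
    """H(x) = integral of cos(t) from 0 to x^2: the upper limit is a function of x."""
    return G(x * x)

x, h = 1.2, 1e-5
numeric = (H(x + h) - H(x - h)) / (2 * h)   # central-difference derivative
chain_rule = math.cos(x * x) * 2 * x        # f(x^2) times d(x^2)/dx
print(numeric, chain_rule)                  # the two agree closely
```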

You can read a lot more about this function in the section on probability distributions. What's important about it for our purpose here is the area under the curve (which is symmetric across the line **x=0**). The area between the limits -∞ and ∞ should equal one because it represents the total probability of an event happening at all, and we often include other factors to "normalize" it, or to force the total area under the curve to be 1. The ratio of any lesser area, like the one between ±a in the plot below, to that total is equal to the probability of an event occurring.

This integral can't be done analytically (with paper and pencil) – it has to be done by numerical methods, but we can still easily find its first and second derivatives through FTOC-2, and thus plot the function very well.
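Python's standard library exposes exactly such a numerically computed area through `math.erf`. A small sketch relating it to the area under the standard normal curve; the ±1 and ±2 test points are illustrative:

```python
import math

def normal_area(a):
    """Area under the standard normal curve between -a and a,
    via the numerically computed error function."""
    return math.erf(a / math.sqrt(2))

print(normal_area(1.0))   # about 0.6827, the familiar "68%" within one sigma
print(normal_area(2.0))   # about 0.9545, within two sigma
```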

Another important curve defined by an integral function is the Fresnel function (fruh · nel'), graphed here on the right. We also considered it above. This function is very important in certain kinds of optics applications.
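Because the Fresnel function is defined by an integral, FTOC-2 lets us differentiate it without integrating. A sketch (I use the **sin(t²)** convention here; some references use **sin(πt²/2)** instead):

```python
import math

def fresnel_S(x, n=4000):
    """Fresnel sine integral S(x) = integral of sin(t^2) from 0 to x,
    approximated with a midpoint sum."""
    dt = x / n
    return sum(math.sin(((i + 0.5) * dt) ** 2) for i in range(n)) * dt

# By FTOC-2, S'(x) = sin(x^2): no integration needed for the derivative.
x, h = 1.0, 1e-4
numeric = (fresnel_S(x + h) - fresnel_S(x - h)) / (2 * h)
print(numeric, math.sin(x * x))   # the two agree closely
```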

The Fresnel *cosine* function is also used frequently, depending on the situation.

**xaktly.com** by Dr. Jeff Cruzan is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. © 2012, Jeff Cruzan. All text and images on this website not specifically attributed to another source were created by me and I reserve all rights as to their use. Any opinions expressed on this website are entirely mine, and do not necessarily reflect the views of any of my employers. Please feel free to send any questions or comments to jeff.cruzan@verizon.net.