$\require{mathrsfs}\require{cancel}\newcommand{\dee}[1]{\mathrm{d}#1} \newcommand{\half}{ \frac{1}{2} } \newcommand{\ds}{\displaystyle} \newcommand{\ts}{\textstyle} \newcommand{\es}{ {\varnothing}} \newcommand{\st}{ {\mbox{ s.t. }} } \newcommand{\pow}[1]{ \mathcal{P}\left(#1\right) } \newcommand{\set}[1]{ \left\{#1\right\} } \newcommand{\lin}{{\text{LIN}}} \newcommand{\quot}{{\text{QR}}} \newcommand{\simp}{{\text{SMP}}} \newcommand{\diff}[2]{ \frac{\mathrm{d}#1}{\mathrm{d}#2}} \newcommand{\bdiff}[2]{ \frac{\mathrm{d}}{\mathrm{d}#2} \left( #1 \right)} \newcommand{\ddiff}[3]{ \frac{\mathrm{d}^#1#2}{\mathrm{d}{#3}^#1}} \renewcommand{\neg}{ {\sim} } \newcommand{\limp}{ {\;\Rightarrow\;} } \newcommand{\nimp}{ {\;\not\Rightarrow\;} } \newcommand{\liff}{ {\;\Leftrightarrow\;} } \newcommand{\niff}{ {\;\not\Leftrightarrow\;} } \newcommand{\De}{\Delta} \newcommand{\bbbn}{\mathbb{N}} \newcommand{\bbbr}{\mathbb{R}} \newcommand{\bbbp}{\mathbb{P}} \newcommand{\cI}{\mathcal{I}} \newcommand{\cR}{\mathcal{R}} \newcommand{\cV}{\mathcal{V}} \newcommand{\Si}{\Sigma} \newcommand{\arccsc}{\mathop{\mathrm{arccsc}}} \newcommand{\arcsec}{\mathop{\mathrm{arcsec}}} \newcommand{\arccot}{\mathop{\mathrm{arccot}}} \newcommand{\erf}{\mathop{\mathrm{erf}}} \newcommand{\smsum}{\mathop{{\ts \sum}}} \newcommand{\atp}[2]{ \genfrac{}{}{0in}{}{#1}{#2} } \newcommand{\ave}{\mathrm{ave}} \newcommand{\llt}{\left \lt } \newcommand{\rgt}{\right \gt } \newcommand{\YEaxis}[2]{\draw[help lines] (-#1,0)--(#1,0) node[right]{x};\draw[help lines] (0,-#2)--(0,#2) node[above]{y};} \newcommand{\YEaaxis}[4]{\draw[help lines] (-#1,0)--(#2,0) node[right]{x};\draw[help lines] (0,-#3)--(0,#4) node[above]{y};} \newcommand{\YEtaxis}[4]{\draw[help lines] (-#1,0)--(#2,0) node[right]{t};\draw[help lines] (0,-#3)--(0,#4) node[above]{y};} \newcommand{\YEtaaxis}[4]{\draw[help lines, <->] (-#1,0)--(#2,0) node[right]{t}; \draw[help lines, <->] (0,-#3)--(0,#4) node[above]{y};} \newcommand{\YExcoord}[2]{\draw (#1,.2)--(#1,-.2) 
node[below]{#2};} \newcommand{\YEycoord}[2]{\draw (.2,#1)--(-.2,#1) node[left]{#2};} \newcommand{\YEnxcoord}[2]{\draw (#1,-.2)--(#1,.2) node[above]{#2};} \newcommand{\YEnycoord}[2]{\draw (-.2,#1)--(.2,#1) node[right]{#2};} \newcommand{\YEstickfig}[3]{ \draw (#1,#2) arc(-90:270:2mm); \draw (#1,#2)--(#1,#2-.5) (#1-.25,#2-.75)--(#1,#2-.5)--(#1+.25,#2-.75) (#1-.2,#2-.2)--(#1+.2,#2-.2);} \newcommand{\IBP}[7]{ \begin{array}{|c | l | l |} \hline \color{red}{\text{Option 1:}} & u=#2 &\color{red}{\dee{u}=#3 ~ \dee{#1}} \\ & \dee{v}=#5~\dee{#1} &\color{red}{v=#7} \\ \hline \color{blue}{\text{Option 2:}} & u=#5 &\color{blue}{\dee{u}=#6 ~ \dee{#1}} \\ &\dee{v}=#2 \dee{#1} &\color{blue}{v=#4} \\ \hline \end{array} } \renewcommand{\textcolor}[2]{{\color{#1}{#2}}} \newcommand{\trigtri}[4]{ \begin{tikzpicture} \draw (-.5,0)--(2,0)--(2,1.5)--cycle; \draw (1.8,0) |- (2,.2); \draw[double] (0,0) arc(0:30:.5cm); \draw (0,.2) node[right]{#1}; \draw (1,-.5) node{#2}; \draw (2,.75) node[right]{#3}; \draw (.6,1.1) node[rotate=30]{#4}; \end{tikzpicture}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&}$

## Section 2.2 Averages

Another frequent application of integration is computing averages and other statistical quantities. (An awful pun lurks in “frequent”: the two main approaches to statistics are frequentism and Bayesianism, the latter named after Bayes' Theorem, which is in turn named for Reverend Thomas Bayes. While this, both the approaches to statistics and their history and naming, is a very interesting and quite philosophical topic, it is beyond the scope of this course; the interested reader will find plenty of further reading.) We will not spend too much time on this topic; that is best left to a proper course in statistics. However, we will demonstrate the application of integration to the problem of computing averages.

Let us start with the definition of the average of a finite set of numbers.

We are being a little loose here with the distinction between mean and average. To be much more pedantic, the average is the arithmetic mean. Other interesting “means” are the geometric and harmonic means: \begin{align*} \text{arithmetic mean} &= \frac{1}{n}\left( y_1 + y_2 + \cdots + y_n \right)\\ \text{geometric mean} &= \left( y_1 \cdot y_2 \cdots y_n \right)^{\frac{1}{n}}\\ \text{harmonic mean} &= \left[\frac{1}{n}\left( \frac{1}{y_1} + \frac{1}{y_2} + \cdots + \frac{1}{y_n} \right)\right]^{-1} \end{align*} All of these quantities, along with the median and mode, are ways to measure the typical value of a set of numbers. They all have advantages and disadvantages; that is another interesting topic beyond the scope of this course, but plenty of fodder for the interested reader and their favourite search engine. But let us put pedantry (and beyond-the-scope-of-the-course reading) aside and just use the terms average and mean interchangeably for our purposes here.

###### Definition 2.2.1

The average (mean) of a set of $n$ numbers $y_1\text{,}$ $y_2\text{,}$ $\cdots\text{,}$ $y_n$ is

\begin{gather*} y_\ave =\bar y = \llt y\rgt =\frac{y_1+y_2+\cdots+y_n}{n} \end{gather*}

The notations $y_\ave\text{,}$ $\bar y$ and $\llt y\rgt$ are all commonly used to represent the average.
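These formulas are easy to check numerically. As a quick sanity check (a sketch in Python; the helper names are our own), here are the arithmetic mean of Definition 2.2.1 together with the geometric and harmonic means mentioned earlier:

```python
import math

def arithmetic_mean(ys):
    # (y1 + y2 + ... + yn) / n
    return sum(ys) / len(ys)

def geometric_mean(ys):
    # (y1 * y2 * ... * yn)^(1/n)
    return math.prod(ys) ** (1 / len(ys))

def harmonic_mean(ys):
    # [ (1/n)(1/y1 + 1/y2 + ... + 1/yn) ]^(-1)
    return len(ys) / sum(1 / y for y in ys)

ys = [1.0, 2.0, 4.0]
print(arithmetic_mean(ys))  # 7/3 ≈ 2.333
print(geometric_mean(ys))   # 2.0
print(harmonic_mean(ys))    # 12/7 ≈ 1.714
```

Note that, as this small example already shows, the three means of the same set of numbers generally differ.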

Now suppose that we want to take the average of a function $f(x)$ with $x$ running continuously from $a$ to $b\text{.}$ How do we even define what that means? A natural approach is to

• select, for each natural number $n\text{,}$ a sample of $n\text{,}$ more or less uniformly distributed, values of $x$ between $a$ and $b\text{,}$
• take the average of the values of $f$ at the selected points,
• and then take the limit as $n$ tends to infinity.

Unsurprisingly, this process looks very much like how we computed areas and volumes previously. So let's get to it.

• First fix any natural number $n\text{.}$
• Subdivide the interval $a\le x\le b$ into $n$ equal subintervals, each of width $\De x=\frac{b-a}{n}\text{.}$
• The subinterval number $i$ runs from $x_{i-1}$ to $x_i$ with $x_i=a+i\frac{b-a}{n}\text{.}$
• Select, for each $1\le i\le n\text{,}$ one value of $x$ from subinterval number $i$ and call it $x_i^*\text{.}$ So $x_{i-1}\le x_i^*\le x_i\text{.}$
• The average value of $f$ at the selected points is \begin{align*} \frac{1}{n}\sum_{i=1}^n f(x_i^*) =&\frac{1}{b-a}\sum_{i=1}^n f(x_i^*) \De x &\text{since $\De x=\frac{b-a}{n}$} \end{align*} giving us a Riemann sum.

Now when we take the limit $n\rightarrow\infty$ we get exactly $\frac{1}{b-a}\int_a^b f(x)\dee{x}\text{.}$ That's why we define
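The limiting process above is easy to watch numerically. Here is a small Python sketch (the helper name is ours; we take each $x_i^*$ to be the right endpoint of its subinterval) comparing the sample average $\frac{1}{n}\sum_{i=1}^n f(x_i^*)$ with the integral $\frac{1}{b-a}\int_a^b f(x)\dee{x}\text{:}$

```python
def sample_average(f, a, b, n):
    # Average of f at n sample points; x_i^* is taken to be the right
    # endpoint of subinterval i, namely x_i = a + i*(b - a)/n.
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(1, n + 1)) / n

# For f(x) = x^2 on [1, 5] the exact average is (1/4) * (125 - 1)/3 = 31/3.
f = lambda x: x ** 2
for n in (10, 100, 1000, 10000):
    print(n, sample_average(f, 1.0, 5.0, n))
```

As $n$ grows, the printed values settle down toward $\frac{31}{3} \approx 10.33\text{.}$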

###### Definition 2.2.2

Let $f(x)$ be an integrable function defined on the interval $a\le x\le b\text{.}$ The average value of $f$ on that interval is

\begin{gather*} f_\ave=\bar f=\llt f\rgt =\frac{1}{b-a}\int_a^b f(x)\dee{x} \end{gather*}

Consider the case when $f(x)$ is positive. Then rewriting Definition 2.2.2 as

$f_\ave\ (b-a) = \int_a^b f(x)\dee{x}$

gives us a link between the average value and the area under the curve. The right-hand side is the area of the region

\begin{gather*} \big\{(x,y)\ \big|\ a\le x\le b,\ 0\le y\le f(x)\ \big\} \end{gather*}

while the left-hand side can be seen as the area of a rectangle of width $b-a$ and height $f_\ave\text{.}$ Since these areas must be the same, we interpret $f_\ave$ as the height of the rectangle which has the same width and the same area as $\big\{(x,y)\ \big|\ a\le x\le b,\ 0\le y\le f(x)\ \big\}\text{.}$

Let us start with a couple of simple examples and then work our way up to harder ones.

Let $f(x)= x$ and $g(x)=x^2$ and compute their average values over $1 \leq x\leq 5\text{.}$

Solution: We can just plug things into the definition.

\begin{align*} f_\ave &= \frac{1}{5-1}\int_1^5 x \dee{x} \\ &= \frac{1}{4} \bigg[ \frac{x^2}{2} \bigg]_1^5 \\ &= \frac{1}{8} (25-1) = \frac{24}{8} \\ &= 3 \end{align*}

as we might expect. And then

\begin{align*} g_\ave &= \frac{1}{5-1}\int_1^5 x^2 \dee{x} \\ &= \frac{1}{4} \bigg[ \frac{x^3}{3} \bigg]_1^5 \\ &= \frac{1}{12} (125-1) = \frac{124}{12} \\ &= 10\frac{1}{3} \end{align*}

Something a little more trigonometric:

Find the average value of $\sin(x)$ over $0 \leq x \leq \frac{\pi}{2}\text{.}$

Solution: Again, we just need the definition.

\begin{align*} \text{average} &= \frac{1}{\frac{\pi}{2} - 0} \int_0^{\frac{\pi}{2}} \sin(x) \dee{x} \\ &= \frac{2}{\pi} \cdot \bigg[ -\cos(x) \bigg]_0^{\frac{\pi}{2}} \\ &= \frac{2}{\pi} (-\cos(\frac{\pi}{2})+\cos(0)) \\ &= \frac{2}{\pi}. \end{align*}
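Both of these worked examples can be sanity-checked numerically. The following Python sketch (the helper name is ours) approximates the average value of Definition 2.2.2 with a midpoint Riemann sum:

```python
import math

def average_value(f, a, b, n=100_000):
    # (1/(b-a)) * integral of f over [a, b], via a midpoint Riemann sum.
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx / (b - a)

print(average_value(math.sin, 0.0, math.pi / 2))  # ≈ 2/π ≈ 0.6366
print(average_value(lambda x: x, 1.0, 5.0))       # ≈ 3
```

The first line reproduces $\frac{2}{\pi}$ from the trigonometric example above, and the second reproduces $f_\ave = 3$ from the example before it.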

We could keep going, but it is better to move on to some more substantial examples.

Let $x(t)$ be the position at time $t$ of a car moving along the $x$-axis. The velocity of the car at time $t$ is the derivative $v(t)=x'(t)\text{.}$ The average velocity of the car over the time interval $a\le t\le b$ is

\begin{align*} v_\ave &= \frac{1}{b-a}\int_a^b v(t)\dee{t}\\ &=\frac{1}{b-a}\int_a^b x'(t)\dee{t}\\ &=\frac{x(b)-x(a)}{b-a} & \text{by the fundamental theorem of calculus.} \end{align*}

The numerator in this formula is just the displacement (net distance travelled — if $x'(t)\ge 0\text{,}$ it's the distance travelled) between time $a$ and time $b$ and the denominator is just the time it took.

Notice that this is exactly the formula we used way back at the start of your differential calculus class to help introduce the idea of the derivative. Of course this is a very circuitous way to get to this formula — but it is reassuring that we get the same answer.
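To make this concrete, here is a short sketch with a made-up position function (the function and the numbers are hypothetical, chosen only for illustration):

```python
# Hypothetical position function x(t) = t^3 (say metres and seconds).
def x(t):
    return t ** 3

a, b = 1.0, 3.0

# Average velocity as displacement over elapsed time:
v_ave = (x(b) - x(a)) / (b - a)   # (27 - 1) / 2 = 13.0
print(v_ave)

# The same quantity as the average of v(t) = x'(t) = 3t^2,
# approximating the integral with a midpoint Riemann sum:
n = 100_000
dt = (b - a) / n
v_ave_integral = sum(3 * (a + (i + 0.5) * dt) ** 2 for i in range(n)) * dt / (b - a)
print(v_ave_integral)             # ≈ 13.0
```

The two printed values agree, just as the fundamental theorem of calculus says they must.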

A very physics example.

When you plug a light bulb into a socket (a normal household socket delivers alternating current, rather than the direct current that USB supplies; at the risk of yet another “the interested reader” suggestion, the how and why of household plugs supplying AC is another worthwhile digression from studying integration, and the interested reader should look up the “War of Currents”, ideally after finishing this section) and turn it on, it is subjected to a voltage

\begin{align*} V(t) &= V_0\sin(\omega t-\delta) \end{align*}

where

• $V_0=170$ volts,
• $\omega=2\pi\times 60$ (which corresponds to $60$ cycles per second; some countries supply power at 50 cycles per second, and Japan actually supplies both, 50 cycles in the east of the country and 60 in the west) and
• the constant $\delta$ is an (unimportant) phase. It just shifts the time at which the voltage is zero.

The voltage $V_0$ is the “peak voltage”, the maximum value the voltage takes over time. More typically we quote the “root mean square” voltage (or RMS-voltage). (This example was written in North America, where the standard voltage supplied to homes is 120 volts; most of the rest of the world supplies homes with 240 volts. The main reason for this difference is the development of the light bulb. The USA electrified earlier, when the best voltage for bulb technology was 110 volts. As time went on, bulb technology improved and countries that electrified later took advantage of this, and of the cheaper transmission costs that come with higher voltage, and standardised at 240 volts. So many digressions in this section!) In this example we explain the difference, but to simplify the calculations, let us simplify the voltage function and just use

\begin{align*} V(t) &= V_0 \sin(t) \end{align*}

Since the voltage is a sine function, it takes both positive and negative values. If we take its simple average over one period then we get

\begin{align*} V_\ave &= \frac{1}{2\pi-0} \int_0^{2\pi} V_0 \sin(t) \dee{t}\\ &= \frac{V_0}{2\pi}\bigg[ - \cos(t)\bigg]_0^{2\pi} \\ &= \frac{V_0}{2\pi}\left( -\cos(2\pi) + \cos 0\right) = \frac{V_0}{2\pi}(-1+1)\\ &= 0 \end{align*}

This is clearly not a good indication of the typical voltage.

What we actually want here is a measure of how far the voltage is from zero. Now we could do this by taking the average of $|V(t)|\text{,}$ but this is a little harder to work with. Instead we take the average of the square of the voltage (so it is always positive) and then take the square root at the end. For a finite set of numbers the analogous quantity is called the “quadratic mean”, yet another way to generalise the notion of the average: \begin{equation*} \text{quadratic mean} = \sqrt{\frac{1}{n}\left(y_1^2 + y_2^2 + \cdots + y_n^2 \right) } \end{equation*} That is

\begin{align*} V_\mathrm{rms} &= \sqrt{\frac{1}{2\pi-0} \int_0^{2\pi} V(t)^2 \dee{t}}\\ &= \sqrt{\frac{1}{2\pi} \int_0^{2\pi} V_0^2 \sin^2(t) \dee{t}}\\ &= \sqrt{\frac{V_0^2}{2\pi} \int_0^{2\pi} \sin^2(t) \dee{t}} \end{align*}

This is called the “root mean square” voltage.

Though we do know how to integrate sine and cosine, we don't (yet) know how to integrate their squares. A quick look at double-angle formulas (a glance at Appendix A.14 will refresh your memory) gives us a way to eliminate the square:

\begin{gather*} \cos(2\theta) =1-2\sin^2\theta \implies \sin^2\theta=\frac{1-\cos(2\theta)}{2} \end{gather*}

Using this we manipulate our integrand a little more:

\begin{align*} V_\mathrm{rms} &= \sqrt{\frac{V_0^2}{2\pi} \int_0^{2\pi} \frac{1}{2}(1-\cos(2t)) \dee{t}}\\ &= \sqrt{ \frac{V_0^2}{4\pi} \bigg[t - \frac{1}{2}\sin(2t) \bigg]_0^{2\pi} }\\ &= \sqrt{ \frac{V_0^2}{4\pi} \left(2\pi - \frac{1}{2}\sin(4\pi) - 0 + \frac{1}{2}\sin(0) \right) }\\ &= \sqrt{ \frac{V_0^2}{4\pi} \cdot 2\pi }\\ &= \frac{V_0}{\sqrt{2}} \end{align*}

So if the peak voltage is 170 volts then the RMS voltage is $\frac{170}{\sqrt{2}}\approx 120.2\text{.}$
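We can double-check this root mean square computation numerically. The following Python sketch (the helper name is ours) approximates the defining integral with a midpoint Riemann sum:

```python
import math

def rms(f, a, b, n=100_000):
    # sqrt( (1/(b-a)) * integral of f(t)^2 dt ), via a midpoint Riemann sum.
    dx = (b - a) / n
    mean_square = sum(f(a + (i + 0.5) * dx) ** 2 for i in range(n)) * dx / (b - a)
    return math.sqrt(mean_square)

V0 = 170.0
print(rms(lambda t: V0 * math.sin(t), 0.0, 2 * math.pi))  # ≈ 170/√2 ≈ 120.2
print(V0 / math.sqrt(2))
```

The two printed numbers agree, confirming $V_\mathrm{rms} = \frac{V_0}{\sqrt{2}}\text{.}$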

Continuing this very physics example:

Let us take our same light bulb with voltage (after it is plugged in) given by

\begin{align*} V(t) &= V_0\sin(\omega t-\delta) \end{align*}

where

• $V_0$ is the peak voltage,
• $\omega=2\pi\times 60\text{,}$ and
• the constant $\delta$ is an (unimportant) phase.

If the light bulb is “100 watts”, then what is its resistance?

To answer this question we need the following facts from physics.

• If the light bulb has resistance $R$ ohms, this causes, by Ohm's law, a current of \begin{align*} I(t) &= \frac{1}{R} V(t) \end{align*} (amps) to flow through the light bulb.
• The current $I$ is the number of units of charge moving through the bulb per unit time.
• The voltage is the energy required to move one unit of charge through the bulb.
• The power is the energy used by the bulb per unit time and is measured in watts.

So the power is the product of the current and the voltage, and hence

\begin{equation*} P(t)=I(t)V(t) =\frac{V(t)^2}{R} =\frac{V_0^2}{R}\sin^2(\omega t-\delta) \end{equation*}

The average power used over the time interval $a\le t\le b$ is

\begin{align*} P_\ave &= \frac{1}{b-a}\int_a^b P(t)\dee{t} = \frac{V_0^2}{R(b-a)}\int_a^b \sin^2(\omega t-\delta)\dee{t} \end{align*}

Notice that this is almost exactly the form we had in the previous example when computing the root mean square voltage.

Again we simplify the integrand using the identity

\begin{equation*} \cos(2\theta) =1-2\sin^2\theta \implies \sin^2\theta=\frac{1-\cos(2\theta)}{2} \end{equation*}

So

\begin{align*} P_\ave &= \frac{1}{b-a}\int_a^b P(t)\dee{t} = \frac{V_0^2}{2R(b-a)}\int_a^b \big[1-\cos(2\omega t-2\delta)\big]\dee{t} \\ &=\frac{V_0^2}{2R(b-a)}\bigg[t-\frac{\sin(2\omega t-2\delta)}{2\omega}\bigg]_a^b \\ &=\frac{V_0^2}{2R(b-a)}\bigg[b-a-\frac{\sin(2\omega b-2\delta)}{2\omega} +\frac{\sin(2\omega a-2\delta)}{2\omega}\bigg]\\ &=\frac{V_0^2}{2R} -\frac{V_0^2}{4\omega R(b-a)}\big[\sin(2\omega b-2\delta)-\sin(2\omega a-2\delta)\big] \end{align*}

In the limit as the length of the time interval $b-a$ tends to infinity, this converges to $\frac{V_0^2}{2R}\text{.}$ The resistance $R$ of a “100 watt bulb” obeys

\begin{align*} \frac{V_0^2}{2R} &=100 & \text{so that} && R &= \frac{V_0^2}{200}. \end{align*}
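Plugging in the peak voltage $V_0 = 170$ volts from the previous example, a quick arithmetic check in Python:

```python
import math

V0 = 170.0
R = V0 ** 2 / 200            # resistance of a "100 watt" bulb
print(R)                     # 144.5 ohms

# Equivalently, using the RMS voltage V_rms = V0 / sqrt(2):
V_rms = V0 / math.sqrt(2)
print(V_rms ** 2 / R)        # ≈ 100 watts
```

So a North American “100 watt” bulb has a resistance of about 144.5 ohms.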

We finish this example off with two side remarks.

• If we translate the peak voltage to the root mean square voltage using \begin{align*} V_0 &= V_\mathrm{rms} \cdot \sqrt{2} \end{align*} then we have \begin{align*} P &= \frac{V^2_{\mathrm{rms}}}{R} \end{align*}
• If we were using direct current rather than alternating current then the computation is much simpler. The voltage and current are constants, so \begin{align*} P &= V \cdot I & \text{but $I = V/R$ by Ohm's law} \\ &= \frac{V^2}{R} \end{align*} So if we have a direct current giving voltage equal to the root mean square voltage, then we would expend the same power.