# Math Insight

The matrix of partial derivatives of a scalar-valued function, $f: \R^n \to \R$, is a $1 \times n$ row matrix: \begin{gather*} \jacm f(\vc{x}) = \left[ \pdiff{f}{x_1}(\vc{x}) \quad \pdiff{f}{x_2}(\vc{x}) \quad \cdots \quad \pdiff{f}{x_n}(\vc{x}) \right]. \end{gather*} Normally, we don't view a vector as such a row matrix. When we write vectors as matrices, we tend to write an $n$-dimensional vector as an $n \times 1$ column matrix. But, in this case, we'll make an exception and view this derivative matrix as a vector, called the gradient of $f$ and denoted $\nabla f$: $$\nabla f(\vc{x}) = \left(\pdiff{f}{x_1}(\vc{x}), \pdiff{f}{x_2}(\vc{x}), \cdots, \pdiff{f}{x_n}(\vc{x}) \right).$$
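The components of the gradient can be approximated numerically, one partial derivative at a time. Here is a minimal sketch using central finite differences; the function `f` and the evaluation point are illustrative choices, not part of the definition above.

```python
# Approximate the gradient of a scalar-valued function f: R^n -> R
# by central finite differences in each coordinate direction.

def gradient(f, x, h=1e-6):
    """Return (df/dx_1, ..., df/dx_n) at the point x, approximately."""
    grad = []
    for i in range(len(x)):
        x_fwd = list(x)
        x_bwd = list(x)
        x_fwd[i] += h  # step forward in coordinate i
        x_bwd[i] -= h  # step backward in coordinate i
        grad.append((f(x_fwd) - f(x_bwd)) / (2 * h))
    return grad

# Example: f(x, y) = x^2 y has gradient (2xy, x^2),
# so at (1, 2) the gradient is (4, 1).
f = lambda v: v[0] ** 2 * v[1]
print(gradient(f, [1.0, 2.0]))  # approximately [4.0, 1.0]
```

Each component is just an ordinary one-variable difference quotient, which mirrors how each partial derivative holds the other variables fixed.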

We emphasize that the gradient vector is defined only for scalar-valued functions. For vector-valued functions, we will stick with viewing the derivative as a matrix.

#### Why view the derivative as a vector?

Viewing the derivative as the gradient vector is useful in a number of contexts. The geometric view of the derivative as a vector with a length and direction helps in understanding the properties of the directional derivative.
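One such property: the directional derivative in the direction of a unit vector $\vc{u}$ is the dot product $\nabla f(\vc{x}) \cdot \vc{u}$. A small sketch, reusing the illustrative example $f(x,y) = x^2y$ with gradient $(4,1)$ at the point $(1,2)$:

```python
import math

def directional_derivative(grad, u):
    """D_u f = grad f . u, where u is a unit vector."""
    return sum(g * ui for g, ui in zip(grad, u))

# Gradient of f(x, y) = x^2 y at (1, 2) is (4, 1).
grad = [4.0, 1.0]

# Direction at 45 degrees from the x-axis, normalized to unit length.
u = [1 / math.sqrt(2), 1 / math.sqrt(2)]
print(directional_derivative(grad, u))  # 5/sqrt(2), about 3.536
```

Since the dot product is largest when $\vc{u}$ points along $\nabla f$, the gradient's direction is the direction of steepest increase, and its length is that maximal rate of increase.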

In another context, we can think of the gradient as a function $\nabla f: \R^n \to \R^n$, which can be viewed as a special type of vector field. When one takes line integrals of this vector field, one discovers the line integral is path-independent, the central idea of the gradient theorem for line integrals.
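Path independence can be checked numerically: integrating $\nabla f \cdot d\vc{r}$ along two different paths with the same endpoints should give the same value, namely $f(\vc{b}) - f(\vc{a})$. A sketch, again using the illustrative function $f(x,y) = x^2y$ and two hypothetical paths from $(0,0)$ to $(1,2)$:

```python
def grad_f(x, y):
    """Gradient of f(x, y) = x^2 y, viewed as a vector field on R^2."""
    return (2 * x * y, x ** 2)

def line_integral(path, n=10000):
    """Integrate grad f . dr along path(t), t in [0, 1], midpoint rule."""
    total = 0.0
    for k in range(n):
        x0, y0 = path(k / n)
        x1, y1 = path((k + 1) / n)
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2  # midpoint of the segment
        gx, gy = grad_f(xm, ym)
        total += gx * (x1 - x0) + gy * (y1 - y0)
    return total

# Two different paths from (0, 0) to (1, 2):
straight = lambda t: (t, 2 * t)       # straight line
curved = lambda t: (t ** 2, 2 * t)    # parabolic path

# Both integrals approximate f(1, 2) - f(0, 0) = 2.
print(line_integral(straight))
print(line_integral(curved))
```

Both paths give (approximately) the same value, illustrating the gradient theorem: the line integral of $\nabla f$ depends only on the endpoints.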