Somebody mentioned a "Jacobian" at work - which sounded exotic and mathy and attractive. I resolved to watch the Khan Academy videos on the Jacobian and learn, dammit, LEARNNNN!

tl;dr: You can use the Jacobian matrix, and its determinant, to find locally linear approximations of the small regions around points, even after those regions have been all warped up by a non-linear transformation. Apparently this is useful for non-linear least squares regression. I'm passively looking for more real world examples.
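
Here's one such example, sketched by me (a toy curve fit, not something from the videos): scipy's non-linear least squares solver can take a function that returns the Jacobian of the residuals and use it to guide the fit.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy data from the model y = a * exp(b * t), with a=2.0, b=1.5
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(1.5 * t)

def residuals(params):
    a, b = params
    return a * np.exp(b * t) - y

def jacobian(params):
    a, b = params
    # Partial derivatives of each residual w.r.t. (a, b): one row per data point
    return np.stack([np.exp(b * t), a * t * np.exp(b * t)], axis=1)

fit = least_squares(residuals, x0=[1.0, 1.0], jac=jacobian)
print(fit.x)  # ~ [2.0, 1.5]
```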


Linear algebra is linear

I've been working my way through Khan Academy's linear algebra series, which is (a) very good and (b) very long (it's been months!). I took linear algebra long ago as an undergrad of yore, and had fuzzy memories of (a) how matrices can be multiplied a bunch of different ways (dot? inner? cross?), (b) that eigenvalues are important, and (c) ORTHOGONAL. I remember enjoying it, but I was and am very rusty.

When I got into data science, I loved learning about principal component analysis and, of course, deep learning, but I also realized the time for a linear algebra refresher was nigh upon me. I highly recommend Khan for these things.

I'm about halfway through, and the biggest takeaway is the elegance and omnipresence and convenience of linear transformation matrices. And what you can do with them! And how they can encode so much information! Also, the frisson of linear independence - i.e. more information.

I think it's valuable/helpful to visualize the geometry of this stuff, so here are some great visualizations (Khan again):

-- an example of a linear transformation

-- an example of a non-linear transformation, a SQUIGGLE FEST, a science fiction opus waiting to be written

A case of the squiggles: What to do with non-linear transformations

So most of the Khan linear algebra I've been doing has been watching Sal Khan work through proofs of all the conveniences and glories of having everything linear. And just generally demonstrating how these two simple rules:

$$T(\bar{v} + \bar{w}) = T(\bar{v}) + T(\bar{w})$$
$$T(\alpha \bar{v}) = \alpha T(\bar{v})$$

for any vectors \(\bar{v}\) and \(\bar{w}\), any scalar \(\alpha\), and any linear transformation \(T\) (and every matrix transformation is linear - these two rules are the definition), can lead to a lot of helpful results.
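
As a quick sanity check (my own toy example, not from the videos), you can verify both rules numerically for any matrix, and watch a non-linear function fail them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Any matrix A gives a linear transformation T(v) = A @ v
A = rng.normal(size=(2, 2))
v, w = rng.normal(size=2), rng.normal(size=2)
alpha = 3.7

# Rule 1: T(v + w) == T(v) + T(w)
assert np.allclose(A @ (v + w), A @ v + A @ w)

# Rule 2: T(alpha * v) == alpha * T(v)
assert np.allclose(A @ (alpha * v), alpha * (A @ v))

# A non-linear map like sin breaks additivity:
assert not np.allclose(np.sin(v + w), np.sin(v) + np.sin(w))
```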

But what about non-linear transformations? Those fail the two rules above, and a lot of linear algebra's machinery breaks down. Likewise, it's been YEARS since I did any calculus, so it was fun and interesting to learn - in the Jacobian videos - how multivariable calculus ports the insights of linear algebra onto non-linear problems.

Specifically, the Jacobian matrix is based on the premise that, in a small enough neighborhood around a point, things stay pretty linear even in the most abominable of squiggle-fests. Or, as Wiki explains,

"The Jacobian matrix is important because if the function \(f\) is differentiable at a point \(x\) (this is a slightly stronger condition than merely requiring that all partial derivatives exist there), then the Jacobian matrix defines a linear map \(\mathbb{R}^{n} \rightarrow \mathbb{R}^{m}\), which is the best (pointwise) linear approximation of the function \(f\) near the point \(x\). This linear map is thus the generalization of the usual notion of derivative, and is called the derivative or the differential of \(f\) at \(x\)."

Visually:

non-linear generally, linear locally

-- screenshot from Khan Academy: Local linearity for a multivariable function

And in plainer English (explaining it to myself): the insight of local linearity lets us do a nice trick: by taking the partial derivatives of the original (non-linear) transformation with respect to the \(x\) and \(y\) directions (in the 2D example), we get a matrix that approximates how points near a chosen point are moved by the non-linear transformation, \(f\).
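
Concretely, for a 2D transformation \(f(x, y) = (f_1(x, y), f_2(x, y))\), the Jacobian matrix is just all of those partial derivatives arranged in a grid:

$$J_f(x, y) = \begin{bmatrix} \frac{\partial f_1}{\partial x} & \frac{\partial f_1}{\partial y} \\ \frac{\partial f_2}{\partial x} & \frac{\partial f_2}{\partial y} \end{bmatrix}$$

Multiply a small displacement vector by \(J_f\) and you get (approximately) where \(f\) sends nearby points.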

Even more cool: in 2D, the determinant of a transformation matrix (the one whose columns are the transformed basis vectors) is the factor by which it scales area! WOW! See here and here. So, when you take the determinant of the Jacobian, you get a sense of how the non-linear transformation changes the area of stuff in that local space. If a little patch had area 1 before, and the Jacobian determinant at that point is 0.5, you can say, "Ah ha, now it's area 0.5!" Or whatever. And this determinant is different, of course, depending on which point in the space you choose! Because it's non-linear!
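
Here's a little sketch of that point-dependence (my example, using sympy; polar-to-Cartesian is the classic non-linear transformation):

```python
import sympy as sp

r, theta = sp.symbols('r theta')

# Polar -> Cartesian: f(r, theta) = (r*cos(theta), r*sin(theta))
f = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])

# Jacobian matrix: partial derivatives with respect to (r, theta)
J = f.jacobian([r, theta])
# Matrix([[cos(theta), -r*sin(theta)], [sin(theta), r*cos(theta)]])

# The determinant is the local area-scaling factor...
det = sp.simplify(J.det())
print(det)  # r

# ...and it depends on WHERE you are: farther from the origin, the same
# little patch of (r, theta) space gets stretched into more area.
print(det.subs(r, sp.Rational(1, 2)))  # 1/2
print(det.subs(r, 3))                  # 3
```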

Perspective in art

So yesterday I doodled Oxford's Radcliffe Camera --

rad cam

-- and was VERY irritated with my total perspective fail.

And, as I was watching the linear algebra videos, and Sal Khan was mentioning the application to 3D game development, I realized: VANISHING POINTS!

This led me, so happily, to the power and glory of Italian Renaissance genius - that most inspiring of geniusnesses. It was the Italian Renaissance masters - Brunelleschi, specifically, who engineered the dome of Florence's magnificent cathedral - who realized that perspective is a mathematical and geometric property. So glorious! So reliable! I got really pumped about this, and about how linear transformations could be applied to art.
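
To make the vanishing-point idea concrete, here's a minimal sketch of my own (nothing from the videos): in homogeneous coordinates, perspective projection is just a matrix, and dividing by depth makes parallel lines converge to a vanishing point.

```python
import numpy as np

def project(point_3d, focal_length=1.0):
    """Project a 3D point onto the z = focal_length image plane."""
    x, y, z = point_3d
    # Perspective matrix in homogeneous coordinates:
    # copies x and y, and keeps z around for the divide
    P = np.array([
        [focal_length, 0, 0, 0],
        [0, focal_length, 0, 0],
        [0, 0, 1, 0],
    ])
    h = P @ np.array([x, y, z, 1.0])
    return h[:2] / h[2]  # the perspective divide

# Two parallel rails, x = -1 and x = +1, marching off into the distance:
for z in [1, 2, 4, 8, 100]:
    left, right = project((-1, 0, z)), project((1, 0, z))
    print(z, left, right)  # both converge toward the vanishing point (0, 0)
```

The perspective divide is the non-linear step; everything before it is a plain matrix, which is exactly why this lives in the 3D-graphics toolbox Sal mentioned.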

Jacobian, not Jacobin

Har har, well, I found it funny. No, it's not that Jacobin. But oooh - an article about Italy!

TODO

  • Write a script to calculate the per-video and total duration of various Khan Academy courses. (Honestly, this should just be on the website!)
  • Apply math to art!?