Friday, November 1, 2013

The theoretical minimum of physics


The theoretical minimum.

What you need to know to start doing physics

by Susskind and Hrabovsky, 2013

This crash course in classical mechanics is targeted at those who “regretted not taking physics at university”, those who perhaps did but have forgotten most of it, and anyone who is just curious and wants to learn how to think like a physicist. Since first-year university physics courses usually have rather high drop-out rates, there must be some genuine difficulties to overcome. Instead of dwelling on the mind-boggling paradoxes of quantum mechanics and relativity as most popular physics books do, wrapping it all up in fluffy metaphors and allusions to eastern philosophy, The Theoretical Minimum offers a glimpse of the actual calculations and their theoretical underpinnings in classical mechanics.

This two-hundred-page book grew out of a series of lectures given by Susskind, but adds a series of mathematical interludes that serve as refreshers on calculus. Although the lectures cover almost exactly the same material as the book, they are a good complement. Some explanations may be clearer in the classroom, often prompted by questions from the audience. Although Susskind is accompanied by Hrabovsky as a second author, the text mysteriously addresses the reader in the first person singular.

A typical first-semester physics textbook may cover less theory than The Theoretical Minimum in a thousand-page volume, although it would probably cover relativity, which is not discussed in this book. There are a few well-chosen exercises in The Theoretical Minimum, some quite easy and a few that take some time to solve. “You can be dumb as hell and still solve the problem”, as Susskind puts it in one of the lectures while discussing the Lagrangian formulation of mechanics versus Newton's equations. That quote fits as a description of the exercises too, as many of them can be solved without really gaining a solid understanding of how it all works.

The book begins by introducing the concept of conservation of information and how it applies to deterministic, reversible systems (all systems considered in classical mechanics are deterministic and reversible). Halfway through the book the first more advanced ideas come into play: the Lagrangian and the principle of least action. In general, one gets an idea of what kinds of questions physicists care about, such as symmetries and conservation laws. Examples of symmetries that are discussed include spatial translation invariance and time shift invariance, and the conservation of energy is a recurrent theme. The trick is simple: take the time derivative of the Lagrangian or the Hamiltonian, and show it to be zero. The principle of least action requires more sophisticated mathematics (the calculus of variations), although the authors try to explain it in very simple terms. Nonetheless, that part is not very easy to follow.
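As a taste of that trick, here is the calculation in its simplest form (a standard argument, not quoted from the book). Assume the Hamiltonian H(q, p) has no explicit time dependence, which is exactly what time shift invariance provides, and use Hamilton's equations dq/dt = ∂H/∂p and dp/dt = -∂H/∂q:

dH/dt = (∂H/∂q)(dq/dt) + (∂H/∂p)(dp/dt) = (∂H/∂q)(∂H/∂p) - (∂H/∂p)(∂H/∂q) = 0.

The energy, represented by the Hamiltonian, is conserved.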

The writing is concise, yet almost colloquial, with only a few typos. Mathematical rigour is thrown out whenever it would clutter the exposition. Susskind does not care for limits in the formulation of derivaties, but uses a delta or an epsilon that is supposedly infinitesimal in a loosely nonstandard analysis kind of way. Most derivations are easy to follow, using elementary calculus and patiently laid out step by step. Some background in one variable and vector calculus will be necessary to follow the text, although all math that is needed (which is not very much) is summarized in the mathematical interludes.

Why should we need to know about Lagrangians, Hamiltonians and Poisson brackets, a student may ask. Susskind's answer might be that Lagrangians make the solution of certain problems much easier than trying to apply Newton's equations, and that Hamiltonians play an important role in quantum mechanics.

The theoretical minimum is probably the most concise introduction to advanced physics out there, highly suitable for self-study. It provides much of the essential background needed for books reviewed here in previous posts, such as Steeb's Nonlinear Workbook or Haken's Synergetics.

Monday, October 28, 2013

Fractal herbarium

This flower was raised sometime in 2001 or 2002. You probably know where it comes from.


Ditto, but who knows what causes its distorted shape?


And finally something more regular. A sketch from the period when I was making my first animation, Leçons de Pythagore, back in 2002. Nothing like it appears in the video though.




Wednesday, September 25, 2013

How to patch your own oscillator

The charming world of analog modular synthesis offers many choices regarding how to construct one's instrument from components. There are lots of oscillators, filters, VCAs, LFOs, signal processors and utility modules to choose among. In that setting, it can be very interesting to build something as elementary as an oscillator out of even more basic components. Here is an example of how it can be done with two modules, neither of which functions as an oscillator on its own.

The modules needed are a utility module that mixes, offsets and inverts signals, and a dual slew limiter (or two separate slew limiters). In particular, this example will work with Doepfer's Slew Limiter A-170 SL and wmd's Invert Offset mk II. However, there is nothing magic about these modules, so other modules that offer equivalent functionality may replace them.


Five patch cords are needed to connect the modules as illustrated. Then, with some tweaking of the knobs, slow oscillations should occur. The settings of all the knobs influence the frequency. By adjusting the two lower knobs of the A-170, which control the rise and fall times, the wave shape can also be varied from rising ramp through triangle to falling ramp. The amplitude may be low, and the frequency is usually sub-audio, although low bass frequencies in the audio range can be obtained. The effects are best observed if the CV out of the Invert Offset is routed to the frequency input of another oscillator.

What is actually going on in this patch? To a first approximation, the slew limiter can be regarded as an integrator. In fact, it is probably more accurate to think of it as a leaky integrator. The Invert Offset consists of two identical blocks with two signal inputs and two outputs each. Let us introduce the labels x+, x-, y+ and y- for the output signals, and ux, uy, vx and vy for the inputs, as shown in the sketch above. The knobs, labeled cx and cy, add a constant offset to the signal. Inferring from the user's manual, each block should sum its inputs and its offset, and provide the sum and its inverse at the two outputs:

x+ = ux + vx + cx,   x- = -(ux + vx + cx)
y+ = uy + vy + cy,   y- = -(uy + vy + cy)

Expressing the action of the slew limiters as integrals of their inputs, following the patch cords that go into the inputs of the Invert Offset module, making a number of substitutions, and taking derivatives to get rid of the integrals, the system simplifies to one of the form

dx/dt = x - y + k1
dy/dt = x + y + k2

where the constants k1 and k2 derive from the offsets.
If the constants are both zero, the eigenvalues of this system are 1±i, indicating that the system is unstable. Clearly something in the model is wrong, since the actual patch does not blow up in any way. As hinted at earlier, the slew limiters do not actually integrate the signal. If they did, there would be infinite gain at dc, so any constant signal fed into one of them would keep increasing linearly. What happens in reality is that, starting from a relaxed state and feeding a constant signal into a slew limiter, the output grows from zero until it reaches the level of the input. If one had two true integrators and an inverter, the equations for a harmonic oscillator,

dx/dt = -y
dy/dt = x,

could be realized quite easily.
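The difference is easy to see numerically. Here is a sketch (not a model of the actual hardware) contrasting a loop of two true integrators with one inversion against the same loop built from leaky integrators that relax toward their inputs; the time constant and step size are made-up values.

#include <stdio.h>

int main(void)
{
    double dt = 0.001, tau = 0.1;
    double x1 = 1.0, y1 = 0.0;   /* true integrators */
    double x2 = 1.0, y2 = 0.0;   /* leaky integrators */
    for (int n = 0; n <= 10000; n++) {
        if (n % 2000 == 0)
            printf("t=%5.2f  true: x=% .4f  leaky: x=% .4f\n",
                   n * dt, x1, x2);
        /* harmonic oscillator: dx/dt = -y, dy/dt = x */
        double dx1 = -y1, dy1 = x1;
        /* leaky version: each output relaxes toward its input */
        double dx2 = (-y2 - x2) / tau, dy2 = (x2 - y2) / tau;
        x1 += dt * dx1;  y1 += dt * dy1;
        x2 += dt * dx2;  y2 += dt * dy2;
    }
    return 0;
}

The true-integrator loop keeps oscillating while the leaky loop spirals down to rest, so leakage alone cannot explain the sustained oscillation either; presumably it is the saturation in the real modules that keeps the amplitude from dying out.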

The moral of this failed modeling attempt is that even seemingly simple modules may hide more complex behaviour than one would naively suspect. In any case, it may be surprising to find that five patch cords connecting these modules in the right way are all it takes to turn them into a low frequency oscillator. Although there is more than one way to patch up an oscillator from these two modules, there are many more ways to patch up systems that do not oscillate. Bistable systems with hysteresis are the result in most cases.


Sunday, September 8, 2013

And you don't even know how much we know about you

In terms of communication, we are all Americans. 

Meet Josef K.

Secret laws are applied in secret courts and the decisions are sent in secret letters to people who are not even allowed to talk about it. This much is widely known by now, despite the pervasive secrecy. And the lava of half-concealed revelations has just begun to flow.


The surveillance programs are not necessarily very effective and have lots of unwanted consequences, such as making people think twice before they send an email. (No, they have worse consequences in fact.) Bruce Schneier's analyses of the situation seem to be among the more accurate at the moment. As he points out, we are bad at estimating risks. Protection against extremely rare events with dire consequences (terrorism) is unduly prioritized whereas we care less about dangers that affect us on a much larger scale, such as car accidents. But this cognitive bias is not the only explanation of why we have this situation. It has to be good business for someone.

Although the news this summer has featured one or two brave whistleblowers, it is perhaps timely to recall the disgraceful fate suffered by Susan Lindauer. Her story is well worth a listen. Here it is: part one and part two.


Friday, August 9, 2013

Synergetics, the book


Hermann Haken: Synergetics. Introduction and Advanced Topics. 

[Disclaimer: There are many things in this book that I do not understand, although hopefully I have grasped the big picture.]

Under the term Synergetics, Haken collects a number of approaches that can be useful in a variety of scientific disciplines ranging from physics, chemistry and biology, to economics and even sociology. Synergetics is presented as its own discipline with its characteristic concepts and methods. Yet this discipline draws on related fields such as thermodynamics, statistical physics, information theory, dynamic systems, control theory, bifurcations and catastrophe theory. Synergetics proposes to shed light on self-organized phenomena in various areas and to treat them within a unified apparatus. In particular, the slaving principle is the one trick that is used again and again. The slaving principle can be thought of in terms of a dynamic system where some variables change fast and others slowly, together with a separation into stable and unstable modes. The stable, fast modes can be eliminated and expressed in terms of the unstable ones, which serve as order parameters, resulting in great simplifications.
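A standard example of the same flavour (not quoted from the book) shows how the elimination works. Suppose an unstable mode u and a stable mode s obey

du/dt = λu - us
ds/dt = -γs + u²

with γ large, so that s relaxes much faster than u. Setting ds/dt ≈ 0 gives s ≈ u²/γ, and substituting back yields a closed equation for the order parameter alone:

du/dt = λu - u³/γ.

The fast, stable mode has been slaved to the slow, unstable one, and the dimension of the problem has dropped by one.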

This tome contains two classic volumes in one. Volume one (Introduction) begins gently with tutorial chapters on basic probability theory, ordinary differential equations, and their combination in stochastic differential equations. After the theoretical background has been presented, there is a chapter on self-organization followed by several chapters devoted to applications in various domains. First, the chapter on physics deals mainly with lasers. Then, as the chapters turn to chemistry, biology and economics in turn, the treatment becomes more and more accessible to the non-specialist. At the same time, however, the models seem to become increasingly simplistic. The examples from biology and population dynamics are already sketchy, and the discussion of applications to economics and sociology does not introduce many useful ideas. Nonetheless, one should remember that Haken was among the pioneers who brought a physicist's tool kit to these fields. In particular,
[...] synergetics has established links between dynamic systems theory and statistical physics. Undoubtedly, the marriage between these two disciplines has started. (p. 364 of the double volume) 
Further, regarding the connections of physics, chemistry, biology and even softer sciences:
It thus appears that we are presently from two different sides digging a tunnel under a big mountain which has so far separated different disciplines, in particular the “soft” from the “hard” sciences. (p. 364-5) 
We see the results of this excavation in numerous papers today, where physicists have begun to address such problems as the motion of crowds at concerts or the opinion formation before elections. However, there are obvious dangers involved in attacking problems that lie far beyond one's sphere of specialization. In the words of Buckminster Fuller (who also wrote a two volume book called Synergetics, otherwise bearing little resemblance to Haken's):
The word generalization in literature usually means covering too much territory too thinly to be persuasive, let alone convincing. In science, however, a generalization means a principle that has been found to hold true in every special case.
Apparently both kinds of generalization are involved in Haken's work; the applicability seems to decrease the further away from physics one gets, until it begins to look suspicious when applied to the social sciences. Meanwhile, the single finding that unites all chapters, the slaving principle, exemplifies the kind of generalization that holds true in several special cases, if not in all conceivable scenarios. It is the method of finding solutions that survives generalization; the same cannot necessarily be said of the modelling of systems in different fields.

Volume two (Advanced Topics) starts over with a long expository chapter on the application domains followed by the introduction of the theory. There are short sections on deterministic chaos, but Haken is not the best source on this. Quasi-periodicity is treated extensively. Although the exposition is clear to begin with, soon enough matters get complicated. If you ever wondered what makes a system of differential equations with quasi-periodic coefficients stable or unstable, this is the text to read.

Matters of style

The first chapters of each volume are tutorial in character and cover material that most readers probably already know. The manner of exposition changes as Haken begins to introduce his own findings—one can sense a shifting of gears when his enthusiasm sets in. Unfortunately, these parts involve solutions that stretch over sections or entire chapters, sometimes using idiosyncratic notation. It is often hard to tell whether a variable is supposed to be real, complex, or a vector, even though one may be able to figure it out from the context.

The writing has the appearance of a stream of consciousness laid out at the blackboard, rather than elaborated at the typewriter. Throughout the book, variable substitutions are profusely employed; so much, in fact, that one almost inevitably loses track of the variables' meaning. The derivations are decidedly informal, with almost no theorems and proofs. (There are a handful of theorems that rely on a long list of assumptions and come with long, unwieldy proofs.) Instead there are long chains of “simplifications” or “abbreviations”, often resulting in expressions that are longer than the ones they replace, truncations of higher order terms in series expansions, and other sorts of approximations. All these tricks are of course what physicists are usually good at, but to readers without the proper background, they may be as baffling as rabbits pulled out of a hat.

If synergetics has to do with self-organization of complex systems, it must be said that Haken is quite terse on the topic of self-organization as such. This is where some conceptual analysis is lacking. On the other hand, the cyberneticians have already contributed much hand-waving philosophizing on self-organization, without necessarily having contributed much to its understanding. Here, at least, one has a class of problems and an approach to their solution, but there is more to self-organization than what is covered in this book.



Sunday, July 28, 2013

On smoothness under parameter changes

Is your synthesizer a mathematical function?

At least it can be considered in such terms. Each setting of all its parameters represents a point in parameter space. The output signal depends on the parameter settings. Assuming the parameters remain fixed over time, the generated audio signal may also be considered as a point in another space. In order to relate these output sequences to perceptually more relevant terms, signal descriptors (e.g. the fundamental frequency, amplitude, spectral centroid, flux) are applied to the output signal.





Now, in order to assess how smoothly the sound changes as one turns any of the knobs that control some synthesis parameter, the first step is to relate the amount of change in the signal descriptors to the distance in parameter space. The distance in parameter space corresponds to the angle the knob is turned; let us call this distance Δc. It is trickier to define suitable distance metrics in the space of audio signal sequences, but why not take a signal descriptor φ, which itself varies over time, and use its time average ⟨φ⟩. The difference Δφ between two such time averages, as the synthesizer is run at two different points in parameter space, may be taken as the distance metric.

A smooth function has derivatives of all orders. Therefore the smoothness of a synthesis parameter may be described in terms of a derivative of the function that maps points in parameter space to points in the space of signal descriptors. This derivative may be defined as the limit of Δφ/Δc as Δc approaches 0. It makes a significant difference whether a pitch control of an oscillator has been designed with a linear or exponential response. But abrupt changes, corresponding to a discontinuous derivative, will be even more conspicuous when they occur.
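As a concrete illustration, here is a sketch under made-up assumptions: the “synthesizer” is a bare sine oscillator, the knob c ∈ [0, 1] sets its frequency through either a linear or an exponential mapping, and the descriptor φ is the time-averaged zero-crossing rate, a crude stand-in for fundamental frequency. The finite difference Δφ/Δc then approximates the derivative at one point in parameter space.

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979
#define SR 44100
#define N  (SR / 2)              /* half a second of signal */

/* time-averaged zero-crossing rate of a sine at the given frequency */
static double descriptor(double freq)
{
    double prev = 0.0;
    int crossings = 0;
    for (int n = 0; n < N; n++) {
        double x = sin(2.0 * PI * freq * n / SR);
        if (x * prev < 0.0)
            crossings++;
        prev = x;
    }
    return (double)crossings / N;
}

/* two hypothetical parameter mappings from knob position to frequency */
static double freq_linear(double c) { return 20.0 + 1980.0 * c; }
static double freq_expo(double c)   { return 20.0 * pow(100.0, c); }

int main(void)
{
    double c = 0.5, dc = 0.01;
    printf("dphi/dc, linear mapping:      %f\n",
           (descriptor(freq_linear(c + dc)) - descriptor(freq_linear(c))) / dc);
    printf("dphi/dc, exponential mapping: %f\n",
           (descriptor(freq_expo(c + dc)) - descriptor(freq_expo(c))) / dc);
    return 0;
}

Running this kind of comparison at several points in parameter space gives a rough picture of how evenly a mapping distributes change along the knob's travel.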

Whereas the derivative describes the smoothness locally at each point in parameter space, another way to look at parameter smoothness is to measure the total variation of a signal descriptor as the synthesis parameter goes from one setting to another. As a compromise, the interval over which the total variation is measured may be made very small, so that a local variation is obtained over a short stretch of the parameter.

Is this really useful for anything?

Short answer: Don't expect too much. But seriously, whether we like it or not, science progresses in part by taking vague concepts and making them crisper, by making them quantifiable. "Smoothness" under parameter changes is precisely such a vague concept that can be defined in ways that make it measurable. Such a smoothness diagnostic may be useful in the design of synthesis models and their parameter mappings, as well as perhaps for introducing and testing hypotheses about the perceptual discrimination of similar synthesized sounds.

The paper was presented as a poster at the joint SMAC/SMC conference.


Friday, June 7, 2013

Limited edition


When does it make sense to publish something on the internet, let's say a recording, as though it were a limited edition? A de facto limitation in the number of downloads is not impossible to achieve if the item is made to drown in the information deluge and then taken off-line as soon as it has been downloaded some specified number of times. Then of course one cannot guarantee that it will never be uploaded again, although that can be made a bit awkward by having very large files.

Does this strategy really work if the internet never forgets? (The Wayback Machine takes care of that, although they do not necessarily save all audio that floats around out there. Can we hope that some unnamed data centre in Utah or elsewhere stores it for us? Maybe, if you send it as an email attachment.) Usually people do their best to boost the number of views and fight hard for their page ranks. Doing the opposite clearly has its merits if the analog of a limited edition is the goal. Obviously, the idea of a limited edition is tied to physical media, to something people can hold in their hands, so a digital file will not easily do as a replacement. The point of limited editions is exactly to make it clear that the resource is scarce: the object you are holding in your hands is a collector's item.

[announcement]

Thinking along these lines, I have published the two-hour apocryphal piece Teem Work, which I intend to replace with something else before it becomes too widespread. Although it can be downloaded, the following reasons speak against it.

  • The composition is merely a sketch. 
  • It will occupy more space on your hard disk than strictly necessary. 
  • It will steal two hours of your precious time. You'll never get them back.

It should be added that Teem Work features synthesis by cross-coupled feedback FM as described in a previous post.



Update (December 2015):

A remixed edition is now available as a digital album. Still it will steal two hours of your time, precious or not.

Tuesday, May 28, 2013

Feedback FM with bells and whistles

Single oscillator feedback FM is a most economical technique for producing rich harmonic tones. However, the technique suffers from parasitic oscillations at the Nyquist frequency when the modulation index is turned up sufficiently high. The most obvious thing to try is to modify the original formula


x[n] = sin(ωn + Ix[n-1])

by lowpass filtering the feedback signal with some filter that has a zero at half the sample rate. A two-point average increases the range the index can take before the spurious oscillations set in, but it cannot stop them at sufficiently high modulation indices. A complementary trick is to put a filter outside the feedback loop. Again, it helps to a certain extent, but should not be expected to solve the problem in all cases. Finally, there is the Overkill Solution of oversampling the system. Or maybe it's not so overkill after all. In any case, a high sample rate is recommended.
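For reference, here is a minimal sketch of the formula with the two-point average in the feedback path; the frequency and index values are arbitrary.

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979

int main(void)
{
    double sr = 44100.0, freq = 110.0, index = 1.5;
    double w = 2.0 * PI * freq / sr;     /* phase increment */
    double x1 = 0.0, x2 = 0.0;           /* two most recent outputs */
    for (int n = 0; n < 200; n++) {
        /* two-point average: a zero at half the sample rate */
        double fb = 0.5 * (x1 + x2);
        double x = sin(w * n + index * fb);
        printf("%f\n", x);
        x2 = x1;
        x1 = x;
    }
    return 0;
}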

feedback FM
Feedback FM waveform with spurious oscillations and no attempt to squelch them.

Depending on the sample rate, the spurious oscillations will typically ring at such a high frequency that only domestic animals will notice them and possibly object to their presence. Nonetheless, the waveform will be contaminated by a conspicuous wiggle that may annoy any purist, whether or not they hear it. It is indeed a spurious or parasitic oscillation, since it follows the sampling rate rather than the synthesis parameters. Sometimes the parasitic oscillation happens at a third or a fourth of the sample rate, or at other subharmonics.

Single oscillator feedback FM is limited to harmonic spectra. Much flexibility is gained by introducing a second oscillator, since there are several ways to connect the two oscillators. Rather than listing all cases separately, we introduce coupling parameters c (cross terms) and b (self-modulation) in the coupled system

      x[n+1] = sin(θ[n] + b1x[n] + c1y[n])
(*)   y[n+1] = sin(φ[n] + b2y[n] + c2x[n])

where the phases θ and φ are incremented by the modulating and carrier frequencies. Which one is which depends on what signal you send to the output. Of course both signals can be used for stereo, but then it makes less sense to call one of the oscillators the carrier and the other one the modulator.

In FM synthesis, the phase variables usually depend only on their respective frequencies. By introducing an interaction term, phase coupling can be used to synchronize the oscillators. Hard sync may have been used with FM before, but the gentler kind of sync used in the Kuramoto model is useful here, as it is also suitable for synchronizing more than two oscillators. Now, the phases are incremented by the oscillators' frequencies as usual, but to that we add a coupling term with strength K:

      θ[n+1] = θ[n] + ωc - K sin(θ[n] - φ[n])
(#)   φ[n+1] = φ[n] + ωm - K sin(φ[n] - θ[n])

Turning up K too much will collapse the two oscillators into a single strong team working in perfect sync. The system (*, #) is just a four-dimensional map with seven parameters and may thus be studied with the appropriate methods of dynamic systems.
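As a point of reference, the complete system (*, #) might be coded as follows; all parameter values below are arbitrary, and either x or y (or both, for stereo) can be routed to the output.

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979

int main(void)
{
    double sr = 44100.0;
    double b1 = 0.3, b2 = 0.3;            /* self-modulation */
    double c1 = 1.2, c2 = 0.8;            /* cross terms */
    double wc = 2.0 * PI * 220.0 / sr;    /* carrier increment */
    double wm = 2.0 * PI * 330.0 / sr;    /* modulator increment */
    double K = 0.01;                      /* phase coupling strength */
    double x = 0.1, y = 0.1;              /* nonzero seeds */
    double th = 0.0, ph = 0.0;
    for (int n = 0; n < 200; n++) {
        double xn = sin(th + b1 * x + c1 * y);        /* system (*) */
        double yn = sin(ph + b2 * y + c2 * x);
        double thn = th + wc - K * sin(th - ph);      /* coupling (#) */
        double phn = ph + wm - K * sin(ph - th);
        x = xn;  y = yn;
        th = thn;  ph = phn;
        printf("%f %f\n", x, y);
    }
    return 0;
}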

As often happens with iterated maps, the feedback FM system exhibits typical behaviour such as period two oscillations, and the period doubling route to chaos. The frequency terms may both be set to zero, which means the system (*) becomes autonomous. Then period doublings can be seen more easily, as shown below. The system has to be seeded with nonzero initial values for any oscillation to occur.

cross-FM
Colour legend: gray, period 1; orange, period 2; blue, period 4; bright yellow, period 3; red, chaos. On the horizontal axis, -1 < c1 = c2 < 4; on the vertical axis, -0.5 < b1 < 5, with b2 = 0.5 held constant. The modulator and carrier frequencies are both 0; hence the coupling term does not influence the dynamics.


We are not done with feedback FM!

Other things to try include modifying the feedback signals by waveshapers and filters. Even the phase coupled signal may be filtered. Each filter adds its state variables to the system and increases its dimension. This is complicated territory; suffice it to say that filters plus strong nonlinearities or high feedback gain equals noise!


Monday, May 13, 2013

Oscilloscopes, phase plots and beyond

Just as an oscilloscope in XY mode traces out one signal x(t) against another y(t), one might plot any related variables against each other.
Oscilloscope tangle
The above illustration shows x(t) against y(t), both of which seem to be periodic, albeit with quite complicated waveforms.

Phase plots of, say, position versus momentum are another common way to display orbits of differential equations. Any recorded signal may be plotted with its numerically estimated derivative on the y-axis. Hint: use a higher order differentiating filter, not just a forward or backward difference.
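Taking the hint, here is a sketch of a fourth-order central difference applied to a made-up test signal; the (x, dx) pairs it prints can be fed straight into a phase plot.

#include <stdio.h>
#include <math.h>

#define N 1000

int main(void)
{
    double x[N], dx[N] = { 0.0 };
    for (int n = 0; n < N; n++)
        x[n] = sin(0.05 * n) + 0.5 * sin(0.13 * n);
    /* f'(n) ~ (-f(n+2) + 8f(n+1) - 8f(n-1) + f(n-2)) / 12h, with h = 1 */
    for (int n = 2; n < N - 2; n++)
        dx[n] = (-x[n + 2] + 8.0 * x[n + 1]
                 - 8.0 * x[n - 1] + x[n - 2]) / 12.0;
    for (int n = 2; n < N - 2; n++)
        printf("%f %f\n", x[n], dx[n]);
    return 0;
}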
phase plot



Graphs of the amplitude spectrum of one signal against that of another signal are harder to interpret. The axes then represent amplitudes, and each point is a unique frequency whose coordinates are its amplitude in signal x and its amplitude in signal y. If the points gather near a thin line along the main diagonal, the two spectra are highly correlated.

Another novel kind of plot is the signed amplitude spectrum. It combines phase information with the amplitude spectrum and distinguishes positive and negative amplitudes depending on the sign of the phase. Below, the red line is the signed amplitude spectrum of one variable and the blue line is that of a related variable in the same system.


All the illustrations come from a 3D ODE that is currently under investigation, suspected guilty of exhibiting interesting behaviour.
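A possible recipe for the signed amplitude spectrum (an interpretation, since the precise definition is not spelled out above): give each bin's magnitude a minus sign whenever its phase is negative. A naive DFT suffices for a sketch; the test signal is made up.

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979
#define N 256

int main(void)
{
    double x[N];
    for (int n = 0; n < N; n++)
        x[n] = sin(2.0 * PI * 8.0 * n / N + 0.7)
             + 0.3 * sin(2.0 * PI * 21.0 * n / N - 1.2);
    for (int k = 0; k < N / 2; k++) {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < N; n++) {        /* naive DFT, O(N^2) */
            re += x[n] * cos(2.0 * PI * k * n / N);
            im -= x[n] * sin(2.0 * PI * k * n / N);
        }
        double amp = 2.0 * sqrt(re * re + im * im) / N;
        double phase = atan2(im, re);
        printf("%d % f\n", k, phase < 0.0 ? -amp : amp);
    }
    return 0;
}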

Thursday, May 2, 2013

Filtering with differential equations


For those of us who are more familiar with digital filters than their analog counterparts, a one pole lowpass filter is easy:

yn = (1 - β)xn + βyn-1,   0 < β < 1.

But how do you filter a signal with an ordinary differential equation?



Working backwards, we should have an ODE that says

dy/dt + Ay(t) = Bx(t)   (*)

for some suitable constants A and B yet to be found. The derivative may be approximated with a forward difference over a short time interval T, so dy(nT)/dt ≈ (y(nT+T) - y(nT))/T. Setting T equal to one sampling period, the discrete time version of (*) is

(yn+1 - yn)/T + Ayn = Bxn

Perform some algebraic shuffling to and fro of the variables to obtain

yn+1 = BTxn + (1 - AT)yn

and recall that the filter coefficients are 1 - β = BT and β = 1 - AT, hence BT = AT and thus A = B. Now we introduce a new constant τ > 0 for the variables in (*) and set 1/τ equal to both A and B. The system then is

dy/dt = (x - y) / τ

where now τ plays the role of a relaxation time constant. The greater τ is, the slower the response of the filter. Also, when the input equals the output the derivative becomes zero, which is to say that the system has unit DC response as required.
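To check the correspondence numerically, here is a sketch comparing the one-pole digital filter with β = 1 - T/τ against the forward-Euler solution of dy/dt = (x - y)/τ, fed with a unit step; the values of T and τ are made up.

#include <stdio.h>

int main(void)
{
    double T = 1.0 / 44100.0;    /* sampling period */
    double tau = 0.001;          /* relaxation time constant */
    double beta = 1.0 - T / tau;
    double y_filter = 0.0, y_euler = 0.0;
    for (int n = 0; n < 100; n++) {
        double x = 1.0;                       /* unit step input */
        y_filter = (1.0 - beta) * x + beta * y_filter;
        y_euler += T * (x - y_euler) / tau;   /* forward Euler step */
        if (n % 20 == 0)
            printf("n=%3d  filter=%f  euler=%f\n", n, y_filter, y_euler);
    }
    return 0;
}

The two recursions are algebraically identical, so the printed columns agree to the last digit.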

blackboard formula

Sunday, March 24, 2013

A new kind of square root


The postmodernism generator has been translated to mathematics. Now there is a program called Mathgen that outputs nonsensical papers on the advances of mathematics, complete with theorems and references. It has certain idiosyncrasies that make it easy to recognize its papers. Authors are often drawn from among the most famous names of mathematics, though usually with the first initial wrong. Theorems and conjectures are generously attributed to pairs of colleagues across history, often using centuries-old personalities as authors of brand new theories. Who has ever heard of the Conway-d'Alembert conjecture? Well, now we have.

The tone is exactly as condescending as one might fear: 'Clearly' such and such result follows; 'as every student knows …', and what follows is invariably clear as mud. Proofs are safely omitted because they are 'obvious'.

All this remarkable research, those 'little known results', are published safely beyond accessibility in Transactions of the Kenyan Mathematical Society, South Korean Journal of Integral Category Theory, Iranian Journal of Homological PDE, and the like. Surely most of these publications cannot be found at your local library anytime soon.

It is not hard to generate plain gibberish with TeX. Begin by listing a few elementary symbols and operators:

const char *alpha = "\\alpha";
const char *beta = "\\beta";
...
const char *r_arrow = "\\rightarrow";
const char *sqrt = "\\sqrt";
const char *sup = "^ ";
const char *sub = "_ ";

Put all the symbols in an array, so they can be easily accessed and picked at random. Concatenate several of the symbols into a string and print it, as sketched below. With some luck, the symbol sequence will not break the TeX syntax. This doesn't happen by itself, so next one might like to do something more structured. Elementary functions (program routines, that is) that generate small expressions like x ∈ ℂ2 or f : ℝ → ℝ are not hard to write.
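A sketch of that step, reusing the symbol definitions above (the exact symbol set is arbitrary); no attempt is made to respect TeX syntax, so some of the output will fail to compile:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static const char *symbols[] = {
    "\\alpha", "\\beta", "\\gamma", "\\rightarrow",
    "\\sqrt", "^", "_", "x", "y", "0", "2", "\\infty"
};
#define NSYM (sizeof symbols / sizeof symbols[0])

int main(void)
{
    srand((unsigned)time(NULL));
    printf("$");
    for (int i = 0; i < 8; i++)
        printf("%s ", symbols[rand() % NSYM]);
    printf("$\n");
    return 0;
}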

This is a sample of the babbling that results from the mere concatenation of a few symbols and numbers without regard for syntactical rules:



Difficulties arise when operators are used, because these expect arguments. An expression should not end with, say, an empty square root with no argument, as the above formula seems to do. In fact it ends with \sqrt{%0^{\sum}}, but this is apparently beyond the wits of TeX.


The notorious Mathgen paper Independent, Negative, Canonically Turing Arrows of Equations and Problems in Applied Formal PDE by M. Rathke contains a larger assortment of abstruse mathematical symbols in hilarious combinations. Or what about the frequent use of various powers of zero? Already the first formula contains expressions such as 0⁻⁴ and 0⁵, and other meaningless entities such as tan(∞⁻¹). A judicious use of elaborate idempotent expressions may even accidentally result in a true statement despite the funny appearance.


Sunday, February 17, 2013

Total variation


The total variation of a real valued function f on an interval I is defined as

V(f) = sup Σ |f(pi) - f(pi-1)|,

taking the supremum over all possible partitions p0 < p1 < … < pN of I = [p0, pN]. Notably, if the function is (continuously) differentiable, the total variation becomes

V(f) = ∫ |f'(x)| dx,

but f does not have to be differentiable, and the total variation may be unbounded.

Sometimes the function itself may be evaluated at any point of the interval, although its derivative either does not exist or is far too complicated to deal with. Then the total variation may be estimated by sampling the function at several points and checking whether or not it converges to some limit as the mesh gets finer. If it doesn't, the curve may be a fractal, so its fractal dimension can be estimated from the procedure.

The length of a fractal curve is a function of the scale of measurement. As the scale of measurement ε varies, the measured length N varies according to N ~ ε^(-D), where D is the fractal dimension. The common procedure then is to make a double logarithmic plot of N against ε, fit a line, and find its slope. However, it would be a grave mistake to blindly accept any automatically calculated slope without checking the error of the fit.

Estimating the total variation at several arbitrary sampling resolutions can be inefficient, unless a clever trick is used. Suppose we begin with a fine resolution with uniform distance Δ = xi - xi-1 > 0 between the points. Then it is easy to obtain the total variation for subdivisions by nΔ, for n = 1, 2, … just by skipping so many points. Even better, one can take averages over the n possible starting offsets,

V̄(nΔ) = (1/n) Σ V(nΔ; offset k),   k = 0, 1, …, n-1,

so as to obtain estimates that do not depend (as much) on the particular chosen sample points.
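Here is a sketch of the whole procedure on a made-up test signal: the variation is computed at mesh nΔ for a few n, averaged over the n possible offsets.

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979
#define N 4096

/* total variation of the samples x[offset], x[offset+step], ... */
static double variation(const double *x, int len, int step, int offset)
{
    double v = 0.0;
    for (int i = offset + step; i < len; i += step)
        v += fabs(x[i] - x[i - step]);
    return v;
}

int main(void)
{
    double x[N];
    for (int i = 0; i < N; i++) {
        double t = (double)i / N;
        x[i] = sin(2.0 * PI * t) + 0.1 * sin(40.0 * PI * t);
    }
    for (int n = 1; n <= 32; n *= 2) {
        double avg = 0.0;
        for (int k = 0; k < n; k++)
            avg += variation(x, N, n, k);
        avg /= n;
        printf("mesh %2d * Delta: V = %f\n", n, avg);
    }
    return 0;
}

If the values converge as n decreases toward the finest mesh, the total variation is bounded; if they keep growing, a fractal may be at work.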

A somewhat related concept is arc length, which is, conceptually, the length of a string superposed on the graph of the function (assuming the function is continuous). The total variation is never greater than the arc length. For the straight line y = kx, 0 < x < t, the total variation squared is V² = (kt)², as compared to the arc length squared, which is t² + (kt)². Now suppose the function is monotone over the interval under consideration. Then, if the function is deformed so as to become more curved while keeping its endpoints, only the arc length will increase while the total variation remains the same. For example, if fn(x) = xⁿ, 0 ≤ x ≤ 1 and n = 1, 2, ..., then the arc length approaches 2 as n increases, whereas the total variation remains 1.

Saturday, January 26, 2013

The derivative of products

An elementary proof 

Knowing some important formulas by heart can be very useful, but if one knows how to derive them, it is no longer necessary to remember the formula. From reading math textbooks (many or most of them?), one can gain the false impression that the process of deriving a formula follows the same sequence as its proof.

Here is an elementary proof that nevertheless involves some not so obvious steps. 

Suppose that

f(x) = u(x)v(x),

then the formula for the derivative is

f'(x) = u'(x)v(x) + u(x)v'(x),
but how do we prove this? Although the proof is straightforward, it is perhaps difficult to remember all the tricks that are required and when to apply them. Here is a standard proof. First, apply the definition of derivative to the product of the two functions:

f'(x) = lim (h→0) [u(x+h)v(x+h) - u(x)v(x)] / h.
The next step is the crucial operation, at once trivial and far from obvious. We are going to both subtract and add u(x)v(x+h) and rewrite the ratio as

[u(x+h)v(x+h) - u(x)v(x+h) + u(x)v(x+h) - u(x)v(x)] / h.
Now, who would think of adding two terms that sum to 0 into such an expression? This is an idea that doesn't make much sense at this point. Indeed, one needs to look a few steps ahead and see what it is going to be needed for. What follows are just some simple factorizations of terms:

[(u(x+h) - u(x))v(x+h) + u(x)(v(x+h) - v(x))] / h.
Break out some terms to get

(u(x+h) - u(x))/h · v(x+h) + u(x) · (v(x+h) - v(x))/h,
then take limits, noting that v(x+h) → v(x) by continuity, replace the difference quotients by derivatives, and we are done:

f'(x) = u'(x)v(x) + u(x)v'(x).


Here, the simple formula seems much easier to memorize than all the steps of the proof. (In fact, you may impress your friends far more if you memorize Hugo Ball's poem Karawane than if you learn to recite the steps of this proof.)

It is highly misleading when formulas such as the above are just plainly stated and then concisely proven. This is most likely not how the formulas were originally discovered. Rather, one would observe a few instances of derivatives of multiplied functions and conjecture a formula. Then, starting from the formula as well as the definition of derivative, one would work backwards and find all the arithmetic manipulations that make the proof work.

Instead of learning a fixed set of steps that are used in particular proofs, one would probably learn a bag of tricks that can be applied in various situations. Then, out of this bag one can grab various operations that can be tried out, until something is found that leads the proof in a promising direction.


Tuesday, January 15, 2013

Against gadgetry


Purportedly intelligent functionality increasingly finds its way into consumer electronics of all kinds. Consider video cameras as an example. Instead of manual brightness and focus controls, these can be handled automatically by pointing the camera at the right target. Fine, except that this makes it more difficult to gain control over the footage if the automation cannot be overridden.

In theory, it would be possible to have a function on your camera that finds out the name of a person whose face you point it at. Similar risks and annoyances are likely to crop up in all places where too much electronic connectivity is built into products.

Someone said that a good musical instrument is one that allows you to play badly. It doesn't correct your mistakes, so you have to practice. If you have practiced and then try playing a gadget that corrects your mistakes, it will only stand in your way.

There are many reasons to keep things simple and stupid. Open modular systems are one interesting trend offering the perfect antidote to these over-designed digital marvels.


Friday, January 11, 2013

Anthropocene

An excellent resource for getting up to speed on the current understanding of climate change is the recent series of blog posts by John Baez. Whereas journalists oversimplify matters, diving straight into the research papers would be overwhelming. Baez is the ideal guide if you know some elementary mathematics (just the basics of ordinary differential equations will do). He begins by explaining concepts such as albedo and the energy balance of incoming and reflected sunlight. Then some positive and negative feedback mechanisms are introduced, and there is a discussion of glacial cycles, bistable models and stochastic resonance. All of it is very accessibly explained; it is definitely worthwhile to take the time and digest this material.

It begins here with some slides and continues as a series of blog posts.

Another useful source of information is Skeptical Science, especially if one ever needs to debunk the myths that people pass on without checking the facts.