Friday, December 11, 2015

On Modular Purism and Sozialrealismus


Among practitioners of modular synthesis, there is today a widespread reverence for the pure modular, where the modular shines on its own as the single sound source, preferably recorded in one take and with minimal post-processing. In improvised music this kind of purism may make sense, but modulars are often used in other ways, e.g. driven by sequencers or set up as a self-generating system more or less nudged in the right direction by the modularist.

When playing live in concert, overdubs or edits are obviously not part of the game. Perhaps that spontaneous flow of live performance is taken as the ideal form that even home studio recordings should mimic.

This ideal of purity often serves as an excuse for acquiring a voluminous modular system. Even more so if one wants to achieve full polyphony. However, there is so much to gain from multi-tracking and editing that it is a wonder that anyone with a modular would refuse to engage with that part of the process.



The pieces on the album SOZIALREALISMUS are collages and juxtapositions of a variety of sources. All pieces are centered around recordings of a eurorack modular, often accompanied by field recordings.

In some pieces the electronic sounds were played through loudspeakers placed on the resonating bodies of acoustic instruments and then recorded again. Recordings have been cut up and spliced together in new constellations.

Although the blinking lights and the mess of patch cords of a modular project an image of complicated machinery leading its autonomous electric life, another aesthetic is possible even in the realm of modular music, an aesthetic of handicraft, allowing the imperfections of improvisation. The included drawings that come with the digital album and the linoleum print cover on the cassette hint in that direction.

Linoleum print © Holopainen. Makes for an interesting stereo image as well.

There has always been a focus on technique and gear in the electronic music community. As modulars become more common, their ability to provoke curiosity may dwindle if the music made with them fails to convince listeners.



Wednesday, August 12, 2015

Golomb Rulers and Ugly Music

A Golomb ruler has marks on it for measuring distances, but unlike ordinary rulers it has a smaller number of irregularly spaced marks that still allow for measuring a large number of distances. The defining property is that no two pairs of marks are the same distance apart: every pairwise distance occurs at most once. The marks are at integer multiples of some arbitrary unit. A regular ruler of six units length has marks at 0, 1, 2, 3, 4, 5 and 6, and it is possible to measure each distance from 1 to 6 units, most of them in several ways. A Golomb ruler of length six could instead have marks at 0, 1, 4 and 6.
Each distance from 1 to 6 can be found between some pair of marks on this ruler. A Golomb ruler with this nice property, that each distance from 1 up to the length of the ruler can be measured with it, is called a perfect Golomb ruler. Unfortunately, there is a theorem that states that there are no perfect Golomb rulers with more than four marks.
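Both properties are easy to check by brute force over all pairs of marks; a minimal sketch:

```python
from itertools import combinations

def is_golomb(marks):
    """A set of integer marks is a Golomb ruler iff all pairwise
    differences between marks are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

def is_perfect(marks):
    """Perfect: every distance from 1 up to the ruler's length occurs."""
    marks = sorted(marks)
    diffs = {b - a for a, b in combinations(marks, 2)}
    return is_golomb(marks) and diffs == set(range(1, marks[-1] + 1))

print(is_golomb([0, 1, 4, 6]))   # True: differences 1,2,3,4,5,6 are distinct
print(is_perfect([0, 1, 4, 6]))  # True: every distance from 1 to 6 occurs
print(is_golomb([0, 1, 2, 4]))   # False: the distance 1 occurs twice
```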

Sidon sets are subsets of the natural numbers {1, 2, ..., n} such that the sums of any pair of numbers in the set are all different. It turns out that Sidon sets are equivalent to Golomb rulers: if two pairs had equal sums, a rearrangement would give two pairs with equal differences, and vice versa. The proof must have been one of the lowest-hanging fruits in the history of mathematics.

An interesting property of Golomb rulers is that, in a sense, they are maximally irregular. Toussaint used them to test a theory of rhythmic complexity precisely because of their irregularity, which is something that sets them apart from more commonly encountered musical rhythms.

There is a two-dimensional counterpart to Golomb rulers which was used to compose a piano piece that, allegedly, contains no repetition and is therefore the ugliest kind of music its creator could think of.

Contrary to what Scott Rickard says in this video, there are musical patterns in this piece. Evidently they did not consider octave equivalence, so there is a striking passage of ascending octaves and hence pitch class repetition.

At first hearing, the "ugly" piece may sound like a typical 1950s serialist piece, but it has some characteristic features, such as its sequence of single notes and its sempre forte articulation. Successful serialist pieces would be much more varied in texture.

The (claimed) absence of patterns in the piece is more extreme than would be a random sequence of notes. If notes had been drawn randomly from a uniform distribution, there is some probability of immediate repetition of notes as well as of repeated sequences of intervals. When someone tries to improvise a sequence of random numbers, say, just the numbers 0, 1, they would typically exaggerate the occurrences of changes and generate too little repetition. True randomness is more orderly than our human conception of it. In that sense the "ugly" piece agrees with our idea of randomness more than would an actually random sequence of notes.
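The point about human intuition can be illustrated with a quick simulation: in a fair 0/1 sequence, about half of all steps repeat the previous symbol, which is far more repetition than most improvised "random" sequences contain.

```python
import random

random.seed(1)
n = 100_000
seq = [random.randint(0, 1) for _ in range(n)]

# Fraction of positions where the next symbol repeats the current one.
repeats = sum(a == b for a, b in zip(seq, seq[1:])) / (n - 1)
print(round(repeats, 3))  # close to 0.5: half of all steps are repetitions
```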

When using Golomb rulers for rhythm generation, it may be practical to repeat the pattern instead of extending a Golomb ruler to the length of the entire piece. In the case of repetition the pattern occurs cyclically, so the definition of the ruler should change accordingly. Now we have a circular Golomb ruler (perhaps better known as a cyclic difference set) where the marks are put on a circle, and distances are measured along the circumference of the circle.
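As a sketch of the cyclic case: the set {0, 1, 3} taken modulo 7 is a perfect cyclic difference set, since each circular distance from 1 to 6 occurs exactly once. The example and the rendering as an onset pattern are illustrative choices.

```python
def circular_distances(marks, period):
    """All directed distances between distinct marks, along the circle."""
    return [(b - a) % period for a in marks for b in marks if a != b]

def is_cyclic_difference_set(marks, period):
    """True if every nonzero circular distance occurs at most once."""
    d = circular_distances(marks, period)
    return len(d) == len(set(d))

def as_rhythm(marks, period, onset="x", rest="."):
    """Render the marks as an onset pattern over one cycle."""
    m = set(marks)
    return "".join(onset if i in m else rest for i in range(period))

marks, period = (0, 1, 3), 7
print(is_cyclic_difference_set(marks, period))  # True
print(as_rhythm(marks, period))                 # xx.x...
```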

Although the concept of a Golomb ruler is easy for anyone to grasp, some generalization and a little further digging leads to the frontiers of mathematical knowledge, with open questions still waiting to be solved.

And, of course, the Golomb rulers make excellent raw material for quirky music.

Monday, March 30, 2015

Formulating a Feature Extractor Feedback System as an Ordinary Differential Equation

The basic idea of a Feature Extractor Feedback System (FEFS) is to have an audio signal generator whose output is analysed with some feature extractor, and this time varying feature is mapped to control parameters of the signal generator in a feedback loop.



What would be the simplest possible FEFS that is still capable of a wide range of sounds? Any FEFS must have three components: a generator, a feature extractor and a mapping from signal descriptors to synthesis parameters. As for the simplicity of a model, one way to assess it is to formulate the model as a dynamical system and count its dimension, i.e. the number of state variables.

Although FEFS were originally explored as discrete time systems, some variants can be designed using ordinary differential equations. The generators are simply some type of oscillator, but it may be less straightforward to implement the feature extractor in terms of ordinary differential equations. However, the feature extractor (also called signal descriptor) does not have to be very complicated.

One of the simplest possible signal descriptors is an envelope follower that measures the sound's amplitude as it changes over time. An envelope follower can easily be constructed using differential equations. The idea is simply to apply a lowpass filter (as described in a previous post) to the squared input signal.
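As a sketch, the envelope follower τ dA/dt = x² − A can be integrated with a simple Euler step. The time constant and the test signal below are arbitrary choices for illustration.

```python
import math

def envelope_follower(signal, dt, tau):
    """Integrate tau * dA/dt = x^2 - A with forward Euler and return
    sqrt(A), an estimate of the signal's amplitude envelope (RMS)."""
    A = 0.0
    env = []
    for x in signal:
        A += dt * (x * x - A) / tau
        env.append(math.sqrt(max(A, 0.0)))
    return env

# A 100 Hz sine with amplitude 0.8, one second at 44.1 kHz.
sr = 44100
sig = [0.8 * math.sin(2 * math.pi * 100 * n / sr) for n in range(sr)]
env = envelope_follower(sig, 1.0 / sr, tau=0.05)
# After the transient, env hovers around the RMS value 0.8/sqrt(2).
```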

For the signal generator, let us consider a sinusoidal oscillator with variable amplitude and frequency. Although a single oscillator could be used for a FEFS, here we will consider a system of N circularly connected oscillators.

The amplitude follower introduces slow changes to the oscillators' control parameters. Since the amplitude follower changes smoothly, the synthesis parameters would follow the same slow, smooth rhythm. In this system, we will instead use a discontinuous mapping from the measured amplitudes of each oscillator to their amplitudes and frequencies. To this end, the mapping will be based on the relative measured amplitudes of pairs of adjacent oscillators (remember, the oscillators are positioned on a circle).

Let g(A) be the mapping function. The full system is

dθ_i/dt = k1·g_i(A) + K·Σ_j sin(θ_j − θ_i)
da_i/dt = k2·g_i(A) − k3·a_i²
τ·dA_i/dt = x_i² − A_i
x_i(t) = a_i sin θ_i,  i = 1, ..., N
with control parameters k1, k2, k3, K and τ. The variables θ are the oscillators' phases, a are the amplitude control parameters, A is the output of the envelope follower, and x(t) is the output signal. Since x(t) is an N-dimensional vector, any mixture of the N signals can be used as output.

Let the mapping function be defined as

g_i(A) = Σ_{j=1..N} b_j·U(A_{i+j−1} − A_{i+j}),  indices taken modulo N

where U is Heaviside's step function and b_j is a set of coefficients. Whenever the amplitude of an oscillator grows past the amplitude of its neighboring oscillators, the value of the function g changes, but as long as the relative amplitudes stay within the same order relation, g remains constant. Thus, with a sufficiently slow amplitude envelope follower, g should remain constant for relatively long periods before switching to a new state. In the first equation, which governs the oscillators' phases, the g functions determine the frequencies together with a coupling between oscillators. This coupling term is the same as the one used in the Kuramoto model, but here it is usually restricted to two other oscillators. The amplitude a grows at a speed determined by g but is kept in check by the quadratic damping term.

Although this model has many parameters to tweak, some general observations can be made. The system is designed to facilitate a kind of instability, where the discontinuous function g may tip the system over into a new state even after it appears to have settled on some steady state. Note that there is a finite number of possible values for the function g: since U(x) is either 0 or 1, the number of distinct states is at most 2^N for N oscillators. (The system's dimension is 3N; the x variable in the last equation is really just a notational convenience.)
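To make the discussion concrete, here is a rough numerical sketch of such a system with forward Euler integration. The particular form of g (a weighted sum of step functions comparing adjacent envelope outputs), the nearest-neighbour coupling topology and all coefficient values are illustrative assumptions, not the settings used for any actual piece.

```python
import math
import random

def heaviside(x):
    """Heaviside step function (taken as 0 at the origin)."""
    return 1.0 if x > 0 else 0.0

def simulate(N=4, steps=20000, dt=1e-4,
             k1=2 * math.pi * 110, k2=4.0, k3=2.0, K=1.0, tau=0.05):
    """Euler integration of a small feature extractor feedback system:
    N sinusoidal oscillators whose frequencies and amplitude growth are
    set by a discontinuous function g of the measured envelopes A."""
    random.seed(0)
    b = [0.5 + 0.25 * j for j in range(N)]            # illustrative weights
    theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]
    a = [0.5] * N                                     # amplitude parameters
    A = [random.uniform(0.0, 0.1) for _ in range(N)]  # envelope outputs
    out = []
    for _ in range(steps):
        # Discontinuous mapping: weighted step comparisons of the
        # envelopes of adjacent oscillators on the circle.
        g = [sum(b[j] * heaviside(A[(i + j) % N] - A[(i + j + 1) % N])
                 for j in range(N))
             for i in range(N)]
        x = [a[i] * math.sin(theta[i]) for i in range(N)]
        for i in range(N):
            # Kuramoto-style coupling to the two neighbours.
            coupling = (math.sin(theta[(i - 1) % N] - theta[i]) +
                        math.sin(theta[(i + 1) % N] - theta[i]))
            theta[i] += dt * (k1 * g[i] + K * coupling)
            a[i] += dt * (k2 * g[i] - k3 * a[i] * a[i])  # quadratic damping
            A[i] += dt * (x[i] * x[i] - A[i]) / tau      # envelope follower
        out.append(sum(x) / N)                           # equal mix as output
    return out

out = simulate()
```

Because g is piecewise constant, the mixture drifts through stretches of stable pitch and amplitude punctuated by abrupt reconfigurations whenever two envelopes cross.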

There may be periods of rapid alteration between two states of g. There may also be periodic patterns that cycle through more than two states. Over longer time spans the system is likely to go through a number of different patterns, including dwelling a long time in one state.

Let S be the total number of states visited by the system, given its parameter values and specific initial conditions. Then S/2^N is the fraction of visited states. One may conjecture that this fraction decreases as the system's dimension increases. Or does it just take much longer for the system to explore all of the available states as N grows?

The coupling term may induce synchronisation between the oscillators, but on the other hand it may also make the system's behaviour more complex. Without coupling, each oscillator would only be able to run at a discrete set of frequencies as determined by the mapping function. But with a non-zero coupling, the instantaneous frequencies will be pushed up or down depending on the phases of the linked oscillators. The coupling term is an example of the seemingly trivial fact that adding structural complexity to a model increases its behavioural complexity.

There are many papers on coupled systems of oscillators such as the Kuramoto model, but typically the oscillators interact through their phase variables. In the above model, the interaction is mediated through a function of the waveform, as well as directly between the phases through the coupling term. Therefore the choice of waveform should influence the dynamics, which indeed has been found to be the case.

With all the free choices of parameters, b coefficients, waveform and coupling topology, this model allows for a large set of concrete instantiations. It is not the simplest conceivable example of a FEFS, but its full description still fits in a few equations and coefficients, while it is capable of seemingly unpredictable behaviour over very long time spans.

Friday, February 20, 2015

Capital in the 21st century: book review

Thomas Piketty: Le capital au XXIe siècle. Seuil 2013

Economic inequality concerns everyone; we all have an opinion about what is fair. On the other hand, equity has not been a primary concern for most economists in recent times, at least not until the publication of Piketty's work. This review is based on the original French edition, although the book has already been translated into several languages.

Piketty's approach is historical, beginning with an overview of capital and revenues in the rich countries from the 18th century to the present. While it turns out to be true that inequalities have increased since the mid 20th century, it is perhaps surprising to learn that there were once even greater inequalities, peaking in the decades before the first world war. Social mobility was low and the best way to ensure a comfortable living standard for someone who was not born into a wealthy family was to marry rich rather than try to get a well paid job. When the gaps between the rich and the poor shrank in the 20th century, at least until the 1970s, Piketty argues that this happened mainly as a consequence of the two world wars. Governments appropriated private capital and introduced heavy marginal taxes that gradually led to a redistribution of wealth and the emergence of a middle class.

Le capital is written for a broad audience with no prior knowledge of economic theory. Central concepts are clearly explained in an accessible way. Mathematical formulas beyond plain arithmetic are avoided, even when their use could have simplified the exposition. However, Piketty stresses that economics is not the exact science it is often believed to be, and he is rightly sceptical about the exaggerated use of fancy mathematical theories with little grounding in empirical facts.

Although Piketty tries to avoid the pitfalls of speculative theories, he still makes some unrealistic thought experiments, not least concerning population growth and its bearing on economic growth. Population growth has two important effects: it increases economic growth and it contributes to a gradual diffusion of wealth. When each person on average has more than one child, and the children inherit equal parts of their parents' patrimony, wealth gradually becomes more equally distributed. Now, it does not require much imagination to realise that population growth cannot go on for much longer on this finite planet, and large scale space migration is not a likely option any time soon.

Piketty is at his best in polemics against other economists or explaining methodological issues such as how to compare purchasing power across the decades. Some products become cheaper to produce, so being able to buy more of them does not necessarily imply that the purchasing power has risen. Entirely new kinds of products such as computers or mobile phones enter the market, which also makes direct comparisons complicated.

Human capital is often supposed to be one of the most valuable resources there are, but Piketty seems uneasy about the whole concept. Indeed, when discussing the American economy in the 19th century, a conspicuous form of human capital enters the balance sheets in the form of slaves with their market value. The wealth in the Southern states of America before abolition looks very different depending on whether you think it is possible to own other human beings or not.

Top incomes in the United States have seen a spectacular rise in the last decades. An argument often advanced in favour of exceptionally high salaries is that productive people should be rewarded in proportion to their merits. The concept of marginal productivity describes the increase in productivity as a job is done by a person with better qualifications. However, there is no reliable way to measure marginal productivity, at least not among top leaders. In practice, as Piketty argues, there is a tendency to "pay for luck" and not for merit as such; if the company happens to experience success, then its CEO can be compensated. The modern meritocratic society, Piketty writes, is more unfair to its losers than Victorian society, where nobody pretended that the economic differences were deserved. At that time, a wealthy minority earned 30-50 times the average income from the revenues of their capital alone.
This vision of inequality at least has the merit of not describing itself as meritocratic. In a way, a minority is chosen to live in the name of all the others, but nobody tries to pretend that this minority is more deserving or more virtuous than the rest of the population. [...] The modern meritocratic society [...] is much harder on the losers. (pp. 661-2)
The level of marginal taxes and progressive taxation appears to be decisive for top incomes. Lowering taxes actually makes it easier for top leaders to argue in favour of increased salaries, whereas high marginal taxes mean that most of the increase will be lost in taxation anyway. On the other hand, there is the problem of tax havens and fiscal competition between countries, which is not easily solved. In addition, lobbyists spend considerable sums trying to convince politicians to keep taxes low, as a new study from Oxfam has shown.

The relative proportions of capital and revenue in the rich countries, as they vary over time, are studied in detail. The amount of capital is usually equivalent to a few years of revenue. However, the curve of capital over time, measured in years of revenue, is found to be U-shaped, with a dip during the two world wars. This relative balance of capital and revenue actually sheds light on the structure of wealth distribution. Most capital is private and consists of real estate and stocks. There are mechanisms that make capital grow, effortlessly as it seems: money sometimes tends to reproduce all by itself ("L'argent tend parfois à se reproduire tout seul").

Here the interesting principle of capital-dependent growth comes into play. As it happens, the more capital you have to begin with, the faster you can make it grow. This is the case even for bank accounts, where the interest rate on savings is usually higher above a certain threshold. However, those interest rates are usually barely enough to compensate for the effects of inflation, and for small amounts of savings they are usually lower. On the other hand, capital in the form of real estate or stocks may grow faster than wages. One reason for the level-dependent growth of wealth is that a larger initial amount of capital makes it easier to take financial risks and to wait patiently for the right moment. The most important explanation, according to Piketty, is that the richest investors have more options to engage expert advisors, and can thus seek out less obvious investments with high capital revenues.

Inheritance, rather than well paid work, was the way to wealth in the 19th century. To what extent are we about to return to that economic structure today? Indeed, there are worrisome indications that today's society risks a return of the rentiers, or persons of private means. Piketty's solution to the wealth distribution problem is, roughly, to increase the transparency of banks and to impose a progressive tax on capital in addition to revenue taxation.

Shortcomings

Capital has become a bestseller and has already had a significant impact on the public debate (e.g. a recent report from Citi GPS on the future of innovation and employment). Piketty leaves the old quarrels between communism and free market capitalism behind and proposes solutions based on evidence rather than wishful thinking or elegant theories with little grounding in reality. However, the last word in this debate has not been said. Free market proponents will have their familiar criticism, but it seems more appropriate to point out the shortcomings and the limited perspective of Piketty's account from an entirely different point of view. His is still a flavour of classical economic theory that does not seriously consider the role of energy consumption in the economy. Nor is it sufficiently concerned with the finite mineral resources, including fossil fuels, that are being depleted, or with the problems of pollution, global warming and ecological collapse that will soon have tremendous impacts on the economy. Piketty is probably fully aware of these problems, and yet, with the exception of a brief mention of the Stern Review, he neglects to make them part of the narrative, which therefore becomes one-sided. Perhaps this is an unfair criticism, since the aim of the book is to demonstrate the mechanisms driving increasing inequality in a historical perspective, but important facets are nonetheless missing.

Although no background knowledge of economics is assumed and many concepts are lucidly explained for a general audience, the book does not try to be an introductory textbook on economic theory. Fortunately there are many resources for reading up on the basic mechanisms of the economy that include the environment and energy resources as part of the equation. Gail Tverberg's introduction is a good place to start. Many interesting articles appear at resilience.org. Ugo Bardi's blog Resource Crisis is worth a visit for further reading. The interwoven problems of debt, peak oil and climate change have been discussed at length by Richard Heinberg, and also in a previous post here. The new keyword that will shape our understanding of the economic system is degrowth.

Wednesday, January 7, 2015

Decimals of π in 10TET


The digits of π have been translated into music a number of times. Sometimes the digits are translated to the pitches of a diatonic scale, perhaps accompanied by chords. The random appearance of the sequence of digits is reflected in the aimless meandering of such melodies. But wouldn't it be more appropriate, in some sense, to represent π in base 12 and map the numbers to the chromatic scale? After all, there is nothing in π that indicates that it should be sung or played in a major or minor tonality. Of course, the mapping of integers to the twelve chromatic pitches is just about as arbitrary as any other mapping; it is a decision one has to take. However, it is easier to use the usual base 10 representation and map it to a 10-TET tuning with 10 chromatic pitch steps per octave.
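A sketch of such a digit-to-pitch mapping, where each digit 0-9 selects one of the ten equal steps of an octave. The base frequency (middle C here) and the hard-coded digit prefix are arbitrary, illustrative choices.

```python
# The first digits of pi (integer part included), hard-coded for illustration.
PI_DIGITS = "314159265358979323846264338327"

def tet10_frequency(digit, base=261.63):
    """Map a digit 0-9 to a pitch in 10-tone equal temperament:
    each step is one tenth of an octave above the base frequency."""
    return base * 2 ** (digit / 10)

melody = [tet10_frequency(int(d)) for d in PI_DIGITS]
print([round(f, 1) for f in melody[:5]])
```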

Here is an etude that does precisely that, with two voices in tempo relation 1 : π. The sounds are synthesized with algorithms that also incorporate the number π. In the fast voice, the sounds are made with FM synthesis where two modulators with ratio 1 : π modulate a carrier. The slow voice is a waveshaped mixture of three partials in ratio 1 : π : π^2.



Despite the random appearance of the digits of π, it is not even known whether π is a normal number or not. Let us recall the definition: a normal number in base b has an equal proportion of all the digits 0, 1, ..., b-1 occurring in it, and equal proportions of all possible sequences of two digits, three digits and so on. ("Digit" is usually reserved for the base 10 number system, so you may prefer to call them "letters" or "symbols".) A number that is normal in every base is simply called normal.

Some specific normal numbers have been constructed, but even though it is known that almost all numbers are normal, proving that a given number is normal is often elusive. Rational numbers are not normal in any base since their expansions end in a periodic sequence, such as 22/7 = 3.142857142857..., where the block 142857 repeats forever. However, there are irrational, non-normal numbers, some of which are quite exotic in the way they are constructed.
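The periodicity claim is easy to verify with long division: a period begins as soon as a remainder repeats. A small sketch:

```python
def decimal_period(num, den):
    """Long division of num/den: return (integer part, digits before
    the period, repeating block). The period is '' if it terminates."""
    integer, rem = divmod(num, den)
    seen = {}      # remainder -> position in the digit string
    digits = []
    while rem and rem not in seen:
        seen[rem] = len(digits)
        rem *= 10
        digits.append(rem // den)
        rem %= den
    if not rem:
        return integer, "".join(map(str, digits)), ""   # terminating
    start = seen[rem]
    head = "".join(map(str, digits[:start]))
    period = "".join(map(str, digits[start:]))
    return integer, head, period

print(decimal_period(22, 7))   # (3, '', '142857')
```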