Musical Idea 18: Expansions on Quantization
02.01.2014
Before departing from the topic of quantization gravity, I’d like to expand on some applications which will allow you to make use of it for things you wouldn’t be able to do strictly using the methods already described.
So yes, first, here comes the part (which always seems to come up with these ideas of mine) where I have to explain that not everything you, as the composer, can clearly see you're doing to the music is perceptible as such to your blind listeners. Fortunately in this case it's all pretty subtle, kind of a matter of semantics, and there are decent alternatives.
So far, all of the examples we've worked with here have dealt with pitch and rhythm. I expect that pitch and rhythm will, upon deeper exploration, still prove to be the musical aspects for which quantization gravity provides the most interesting results. This is because given sets of points along their continua tend to sound quite distinct from each other. All I'm really saying is that different tunings sound very different from each other, and different beats sound very different from each other, but different sets of, say, timbres do not sound very different from each other.
You might knee-jerk object to that last statement. I mean, of course if you were quantizing music from sounding like an entire orchestra to sounding like only a single instrument within each section at a time, you could appreciate that, right? Suppose first you pull all woodwinds into being oboes and all horns into being trumpets, and then later you instead pull all woodwinds into being piccolos and all horns into being trombones. Well, the issue with this example is that the timbres that constitute an orchestra do not exist on a continuum with each other; the timbres that constitute each of these sections don't even exist on a continuum with each other. Remember that pitch is frequency and rhythm is duration, both of which are quantifiable: they're related on a number line; timbres, on the other hand, are too complexly different from each other to be considered in terms of each other. They're just incomparable quantitatively as groups; you can compare two at a time, in degrees from being-one to being-the-other, but as soon as you try to say that a third timbre exists in between the two, you surrender your right to consider what you're doing "quantization", since it'd be totally arbitrary.
So now let’s look at such an actually quantifiable timbral continuum, say, from violin to tuba. And imagine that our underlying ideal music moves fluidly between these two timbres. And imagine that we have two different quantization lattices, each with twelve points between violin and tuba, but they are completely different points. Now imagine listening to the underlying ideal music. Now it gets pulled completely to the first quantization lattice. Now it goes back to the ideal. Now it gets pulled completely to the second quantization lattice. I expect we could hear the difference between the underlying ideal and either of the quantizations, but I have doubts that we would be able to discern one quantization from the other, in the way that we would easily be able to identify the difference between two completely different sets of twelve frequencies chosen from a pitch continuum as completely different tuning systems, or the difference between two completely different arrangements of onsets in a repeating rhythmic bar. Tweak one note in your pitch set by the smallest amount and you’ve got a different key, a different mood; tweak one gap or cluster in your beat and you’ve got a totally different groove. It just doesn’t work that way with a timbral continuum, though, and it’s because nearby points on one are insufficiently distinct, whether played together, or played apart:
- Sound an A and a slightly out-of-tune A simultaneously and you get a pronounced beating sound, while if you play a 2:3::violin:tuba and a 3:4::violin:tuba simultaneously, no one will bat an eye.
- We can tell an A from an A flat when each is played in isolation. However, we can't really identify a 2:3::violin:tuba or a 3:4::violin:tuba that way.
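The two-lattice thought experiment above can be sketched concretely. This is a minimal illustration, assuming the violin-to-tuba blend is reduced to a single parameter `t` in [0, 1] (violin at 0, tuba at 1); the lattice points are randomly generated stand-ins, not any real set of timbral waypoints:

```python
import random

def snap(t, lattice):
    """Pull a blend parameter t in [0, 1] to the nearest lattice point."""
    return min(lattice, key=lambda p: abs(p - t))

# Two completely different 12-point lattices on the violin(0)..tuba(1) continuum.
rng = random.Random(0)
lattice_a = sorted(rng.random() for _ in range(12))
lattice_b = sorted(rng.random() for _ in range(12))

# The freely moving "underlying ideal", then pulled fully onto each lattice.
ideal = [rng.random() for _ in range(8)]
quant_a = [snap(t, lattice_a) for t in ideal]
quant_b = [snap(t, lattice_b) for t in ideal]
```

The claim in the text is that we could hear `ideal` versus either `quant_a` or `quant_b`, but probably not `quant_a` versus `quant_b`.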
So in conclusion, you could use quantization gravity on timbre to share with your listeners the distinction between the non-quantized version and a quantized version in general, but you couldn't really say much about the nature of that quantization (such that you could distinguish it from other quantizations) beyond the simple fact that it is quantized, or perhaps the overall roughness with which it has been quantized (a quantization with a total of five points will definitely sound more quantized than one with twelve; add too many points and you won't be able to tell it's quantized at all).
Most of the other aspects that I've proposed so far in this series, such as diegeticity, voxicity, entitative ken, the noise aspect, etc., would not really be able to take full advantage of this offering of quantization gravity, for the same reason: restricted sets of values along their continua are insufficiently distinctive.
In fact, I can think of only one aspect I've discussed besides pitch and rhythm that would work: spatiality. While direction is smoothly continuous, we also perceive distinct points within it. We have words for each of the two directions on each of the three main Euclidean spatial axes, which bind our perception: forward, backward, above, below, to the left, to the right. Combining these two or three at a time gives 20 more directions, the 12 edges and 8 corners of this cube, which are the next simplest set of directions we conceive of: combinations such as above-and-to-the-left, or above-and-to-the-left-and-in-front. Throw in "smack dab in the middle" as an option, plus maybe grant the possibility of "closeby" and "far off" for each of the other 26, and we've already got 53 distinct spatial positions (nice, like 53TET). It's not too difficult to imagine a voice starting out whipping all over the place, but then, via quantization gravity, getting pulled into snapping to one of twelve directions, the cube's edges, and that definitely is easily perceptible and has meaning. (Spatiality doesn't pass the test of neighboring points being strikingly distinct from each other, as pitch and duration do, so I have some doubt, but am nonetheless excited to try it out.)
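The counting above can be checked directly. This sketch represents each direction as a triple in {-1, 0, +1} on the three axes (a hypothetical encoding; the "near"/"far" distance labels are likewise just stand-ins for "closeby" and "far off"):

```python
from itertools import product

# The 26 directions: every combination of {-1, 0, +1} on the three axes,
# excluding the origin. 6 have one nonzero axis (faces), 12 have two
# nonzero axes (edges), and 8 have three (corners).
directions = [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]

# Each direction at two distances, plus "smack dab in the middle".
positions = [("center",)] + [(d, dist) for d in directions
                             for dist in ("near", "far")]
```

Counting nonzero axes recovers the 6 + 12 + 8 breakdown, and the total comes out to 53.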
So the point of this entry is to show that while the orchestral situation has been disqualified as true quantization, we can still put quantization to interesting effect on it. There are really two approaches here.
One is to just go ahead with the suggestion of giving each timbre its own dimension. In the earlier context I had technically suggested that each timbre would only get its own direction, but for these purposes, unless you had pairings of timbres you wanted to be opposites for some reason, I would go ahead and give each timbre its own entire dimension. So you'd end up with something like a 30- or 50-dimensional space. These would be ray-type dimensions, as opposed to reality's line-type ones, because there is a terminus on one end, the not-being-this-timbre end, since you can't be negative a timbre. The other direction could go on infinitely, I suppose, toward infinite being-this-timbre-ness, and you'd devise things so that the resulting timbre of your voice was basically the ratio of the being-timbre values to each other. As for what you'd do when every timbre was at zero, well, good luck, buddy: maybe make it a sine wave or something, or better yet, just don't let that happen.
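A minimal sketch of this ray-dimension idea, assuming per-timbre weights stored in a dict (the function name `mix` and the silence fallback for the all-zero case are my own choices for illustration):

```python
def mix(weights):
    """Interpret nonnegative per-timbre weights as a mixture (their ratio).

    Each timbre gets a "ray" dimension: clamped at zero on the
    not-being-this-timbre end, unbounded toward being-this-timbre.
    """
    clamped = {name: max(0.0, w) for name, w in weights.items()}
    total = sum(clamped.values())
    if total == 0:
        # Every timbre at zero: the resulting timbre is undefined;
        # here we just fall back to silence.
        return {}
    return {name: w / total for name, w in clamped.items()}
```

So a voice at weight 1.0 on the oboe ray and 3.0 on the tuba ray sounds 25% oboe, 75% tuba; scaling both weights by any factor leaves the timbre unchanged, which is exactly the "ratio of being-timbres" behavior described above.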
The reason to take this approach is that it's still truly quantization, recognizing that the timbres are numerically incomparable. However, it is important to recognize that you cannot tell all the timbres "closest" in sound to oboe to become more oboe-like. There are no "neighboring sounds" because nothing is in terms of anything else. Dimensions are by definition totally independent, incomparable properties; in a traditional Euclidean space, x has no more affinity for y than it does for z.
The way this will work is that when you quantize, you'll choose as your set of points the pure timbres themselves: the point sitting at full strength on each timbre's axis and at zero on every other.
The result being that each entity will become purely whichever timbre it is the most at a given moment. So even if an entity was currently sounding 25% like a cello, but overall much more like a woodwind, since the rest consisted of 15% each of piccolo, flute, oboe, clarinet, and bassoon, the entity would end up as a cello, not a woodwind: this system is completely agnostic about our human sensibilities about groupings of similarity between these timbres.
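The snapping just described amounts to taking whichever single timbre has the largest weight, ignoring family groupings entirely. A sketch, using the cello-versus-woodwinds example from above (the function name is hypothetical):

```python
def snap_to_pure(weights):
    """Quantize a timbre mixture to the single timbre with the largest weight."""
    strongest = max(weights, key=weights.get)
    return {name: (1.0 if name == strongest else 0.0) for name in weights}

entity = {"cello": 0.25, "piccolo": 0.15, "flute": 0.15,
          "oboe": 0.15, "clarinet": 0.15, "bassoon": 0.15}
# 75% of the mixture is woodwind, but no single woodwind beats the cello.
result = snap_to_pure(entity)
```

`result` puts the full 1.0 on cello: the five woodwind weights never pool together, because the system has no notion that they belong to a family.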
Now alternatively you could choose a more random set of points: mixtures scattered throughout the space rather than the pure timbres.
But the problem you'll run into there is that each entity will have some bizarre mutt timbre. If the underlying ideal is unrestrained, and entities ideally would each individually be all over the place in timbre, then they're going to be switching from point to point; but with all the points mutts, they will be too difficult to distinguish from one another, and thus it will be difficult to perceive the continuity when any given entity switches timbres. While you could further stipulate that the underlying ideal reel itself in a bit, I feel this runs counter to the essence of quantization gravity, since then why would you even need to consider your effect in these terms? You might as well go all the way and just tell your underlying ideal to do all the halting of timbral motion itself, and then why refer to it as an underlying ideal at all? I think the essential point is for the ideal to be completely free beneath the quantization lattice: subject to restraint by the lattice, but never working together with it or respecting it.
The second approach to using quantization on timbre, unlike the first, enables you to create a system of comparison between your timbres, so that you can consider them to sound more or less like each other and thus quantize them toward or away from each other accordingly. The sacrifice is accepting that it's no longer truly quantization, since your system is arbitrary. Most of the time this will probably not be too big of a deal; your call.
All you have to do is make a space with enough dimensions to realize whatever system of comparison you're imagining. I suspect that while most people's perception of instrument groupings can't be fully captured on a 1D line, many people's will work out on a 2D plane, and 3 dimensions will suffice for almost all systems. So basically, rather than correlating timbres with directions or dimensions, you make them points: e.g. for the 2D example, you could do top left, woodwinds; top right, brass; bottom right, strings; bottom left, keys/percussion. And thus if you feel like an oboe is a "brassy"-sounding woodwind, then you'd plot it somewhere to the left of top center, and maybe piano, being a "stringy" key/percussion, you could put to the left of bottom center, etc.
The next step is to understand that what you've created is a space where the timbre of every point is determined not by how far it lies in one direction or another, but by its distance from each of the defining points you've placed. Then you just do traditional quantization gravity on this space, being mindful in this case of where those defining points are.
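A sketch of this defining-point space with gravity at full strength, where quantization degenerates into snapping to the nearest defining point. The coordinates are made-up placements following the 2D layout described above (woodwinds top left, brass top right, strings bottom right, keys/percussion bottom left, with oboe and piano nudged as suggested):

```python
import math

# Hypothetical 2D placements for a handful of timbres.
instruments = {
    "flute":   (-1.0,  1.0),   # woodwinds: top left
    "oboe":    (-0.4,  1.0),   # a "brassy" woodwind, left of top center
    "trumpet": ( 1.0,  1.0),   # brass: top right
    "violin":  ( 1.0, -1.0),   # strings: bottom right
    "piano":   (-0.4, -1.0),   # a "stringy" key/percussion, left of bottom center
    "marimba": (-1.0, -1.0),   # keys/percussion: bottom left
}

def nearest_instrument(point):
    """Snap a roaming point in the timbre plane to the closest defining point."""
    return min(instruments, key=lambda name: math.dist(point, instruments[name]))
```

With partial gravity you would instead pull the roaming point some fraction of the way toward its nearest defining point; because distances between instruments are meaningful here (unlike in the ray-dimension approach), "more or less like each other" now has teeth.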