Thursday, February 28, 2008

The isometric room

The following is a demonstration coded in JavaScript of the Filmation technique we described at a prior entry. Click on the image so that it gains the keyboard focus and then use the keys Q, S, P, L and the space bar to move the hero around. Much of the graphic material has been obtained or adapted from Supertotto with the kind permission of its author Totto Renna.
The most interesting part of the code deals with the key issue of sorting the scene objects according to the "is behind" relation:
function depth_sort(objs){
  for(var pivot=0;pivot<objs.length;){
    var new_pivot=false;
    for(var i=pivot;i<objs.length;++i){
      var obj=objs[i],parent=true;
      for(var j=pivot;j<objs.length;++j){
        var obj2=objs[j];
        // behind(a,b): does a lie behind b?
        if(i!=j&&behind(obj2,obj)){parent=false;break;}
      }
      if(parent){ // nothing unsorted is behind obj: move it to the pivot
        objs[i]=objs[pivot];objs[pivot]=obj;++pivot;new_pivot=true;
      }
    }
    if(!new_pivot)++pivot;
  }
}

depth_sort can be seen as a very crude implementation of topological sorting, where an edge from object A to object B exists if A is behind B. depth_sort's running time is O(n³), acceptable only for low values of n. During the execution of the routine, the variable pivot separates the elements that have been sorted so far from those that still have not. The observant reader might have noticed that the line

if(!new_pivot)++pivot;

is an expedient hack that forces the pivot forward just in case it did not advance after a pass through the elements; this happens when there are "is behind" cycles, as we discussed previously. You can try yourself to move the blocks into such cyclic positions and see the effect on rendering.

Sunday, February 24, 2008

Derivation without a virtual destructor

There is a piece of advice, commonly found in C++ newsgroups and tutorials, that classes without a virtual destructor should not be derived from, on grounds that deleting an object of the derived class through a pointer to the base class is undefined behavior:

struct A{};
struct B:A{};

void f(A* a)
{
  delete a;
}

int main()
{
  B* pb=new B();
  f(pb); // undefined behavior
}

This is often invoked as an argument against the suitability of deriving from fat classes (like std::string or the STL containers) to provide some extra functionality without adding state.

I do not agree with this rule forbidding derivation without a virtual destructor, and in fact I think that the advice is wrong in most situations. For one, passing an object of a derived class is not the only way of producing undefined behavior in the context described above:

int main()
{
  A a;
  f(&a); // undefined behavior
}

What does this teach us? Well, passing an object through a pointer does not mean that the object is deletable, and conversely it does not mean either that the function accepting the object will always try to delete it. In most (all?) situations the caller and the callee operate under a contract establishing whether a given argument passed by pointer can be deleted by the callee or not. If it is not, passing a pointer to a derived class is perfectly legal.

And adding to that, classes without virtual destructors are rarely passed by pointer, and even more rarely allocated on the heap; since a class without a virtual destructor almost surely has no virtual functions at all, handling it polymorphically makes little sense. Consider how many times you've used std::string in your code, how frequently those std::strings were passed by pointer and, finally, in how many situations (if any) some piece of your code deleted a pointer to a std::string.

So, my opinion is that deriving from a non-virtual (and hence, without virtual destructor) class, such as std::string or STL containers, is valid in most cases, and indeed is an expedient method to add functionality to a class without resorting to tedious techniques like manual forwarding. One only has to be careful that objects of the derived class are not passed via a pointer to the base class in contexts where the pointer will be deleted; this is about as evident as avoiding passing addresses of stack objects under the same scenario.

Wednesday, February 20, 2008

Filmation math

In 1984, European teenagers were thunderstruck by the appearance of the ZX Spectrum hit Knight Lore from a firm called Ultimate Play the Game. This adventure game featured a rendering technique like nothing we had ever dreamed of before.

In a world of 48 KB, 2D platform games, Knight Lore represented a quantum leap forward with its strikingly realistic 3D world based on isometric perspective. Ultimate called this rendering technique Filmation. I used to wonder how they could implement Filmation in such a limited computer as the ZX Spectrum, but the math behind it turns out to be simpler than it might seem.

A given room consists of a background set (the stone walls in the example above) and a bunch of objects. We can assume that the objects are roughly convex: if an object is not convex, for instance if it has holes like the arched gateways, we can just decompose it into a number of convex elements.

Since isometric perspective does not introduce any scaling effect (distant objects are drawn the same size as closer ones), rendering a frame boils down to drawing the background set and then all the room objects, applying a simple projection rule:

(x,y,z) → (x0 + √3 x/2 − √3 y/2, y0 − x/2 − y/2 + z)

which is generally approximated by the following:

(x,y,z) → (x0 + x − y, y0 − x/2 − y/2 + z)

so as to simplify computations as well as graphic design (the x and y axes are projected to easily pixelizable lines with slope 1/2). But there is one detail left: if an object A appears behind B, we must draw A before we render B. So, room objects have to be ordered for drawing purposes according to the "is behind" relation. In which situations is A behind B? As objects are assumed to be convex and they do not overlap (i.e. A ∩ B = ∅), we can approximate them by prisms with opposite vertices (xA,yA,zA), (x'A,y'A,z'A) and (xB,yB,zB), (x'B,y'B,z'B), with xA<x'A, yA<y'A, zA<z'A, and similarly for B. These prisms are projected to hexagons as depicted in the figure:

A little calculation shows that the hexagons for A and B overlap if and only if:

  1. [xA−y'A, x'A−yA] and [xB−y'B, x'B−yB] overlap, and
  2. [xA−z'A, x'A−zA] and [xB−z'B, x'B−zB] overlap, and
  3. [−y'A+zA, −yA+z'A] and [−y'B+zB, −yB+z'B] overlap.

If the hexagons do not overlap, it is immaterial which object we draw first; if they do overlap, we can apply the following easily provable properties:

  1. x'A < xB ⇒ A is behind B,
  2. x'B < xA ⇒ B is behind A,
  3. y'A < yB ⇒ A is behind B,
  4. y'B < yA ⇒ B is behind A,
  5. z'A < zB ⇒ A is behind B,
  6. z'B < zA ⇒ B is behind A.

Since the prisms A and B do not overlap (in 3D), it can be seen that at least one of the clauses 1-6 can be applied, so we can use these properties to sort all the room objects and then proceed to drawing. Note that the relation "is behind" defined by the algorithm above is not total: if the projections of two blocks do not intersect, then neither block is behind the other; thus, in general blocks can be sorted in more than one acceptable manner.

Are we done then? It looks so, and actually a rendering algorithm can be implemented along these lines and seemingly works fine. But there is a fundamental flaw in our analysis. The relation "is behind" is not transitive:

In the figure, A is behind B, B is behind C and C is behind A! There is no way that these objects can be sorted so that they are rendered properly; if we wish to keep our basic algorithm, at least one of the objects should be split in two so as to restore transitivity, but knowing when and how to do this is not trivial. How did Ultimate programmers solve this problem? Well, the answer is that they did not solve it. Consider for instance the following snapshot, taken from Nigel Barford's Java Spectrum Emulator:

One table is on top of the other and slightly displaced with respect to it. If we position our hero so that he stands behind the bottom table and in front of the top one, something unexpected happens:

The top table is suddenly rendered as if it were behind the bottom table: if you examine this configuration you will see that we have indeed an antitransitive cycle composed of the two tables and the main character. As these situations are uncommon, the programmers of the Filmation engine decided to just ignore them. So, next time you play a Filmation game and observe one of these rendering quirks, you will know the math behind them.

Saturday, February 16, 2008

A non-Euclidean characterization of convexity: properties

The extension of the concept of convexity closure we devised at a prior entry has some of the usual properties of standard convexity. In the following we consider a generic metric space S, and use the following notation:

CX := S \ X,
X ≤ Y := X is included in Y,
X ≥ Y := X is a superset of Y.

Also, we drop parentheses in some expressions to make them more readable.

Theorem. Hwave verifies the following properties (proof trivial):

  1. Hwave(Ø) = Ø.
  2. X ≤ Hwave(X).
  3. X ≤ Y ⇒ Hwave(X) ≤ Hwave(Y).

Lemma. For any sets X, Y ≤ S, r ≥ 0, the following properties hold:

  1. Br(X ∪ Y) = Br(X) ∪ Br(Y).
  2. Br(X ∩ Y) ≤ Br(X) ∩ Br(Y).
  3. CBrCBrCBrCBrX = CBrCBrX.

Proof. 1 and 2 are trivial. As for 3, an element x belongs to CBrCBrX iff Brx ≤ BrX, hence BrCBrCBrX ≤ BrX ⇒ CBrCBrCBrX ≥ CBrX ⇒ BrCBrCBrCBrX ≥ BrCBrX ⇒ CBrCBrCBrCBrX ≤ CBrCBrX (the converse inclusion is trivial).

For Hwave to qualify as a closure operator it must be idempotent, i.e. Hwave(Hwave(X)) = Hwave(X), so that we can define convex sets as those in the range of Hwave. This property is not verified in general; as a counterexample take S = N \ {3} with dist(n,m) = |m − n| and X = {0}, resulting in Hwave(X) = {0,1}, Hwave(Hwave(X)) = {0,1,2}. So, we must impose some restrictions on S to guarantee the idempotence of Hwave.

Definition. S is regular if BrX ≤ BrY ⇒ BsX ≤ BsY for all X, Y in S and r ≤ s.

Lemma. If S is regular, CBrCBrCBsCBsX ≤ CBmax{r,s}CBmax{r,s}X for all X in S.

Proof. Let t = max{r,s}. x belongs to CBsCBsX iff Bsx ≤ BsX, which by the regularity of S implies that Btx ≤ BtX, hence CBsCBsX ≤ CBtCBtX and CBrCBrCBsCBsX ≤ CBrCBrCBtCBtX. By an equivalent argument, CBrCBrCBtCBtX ≤ CBtCBtCBtCBtX, and this latter set is CBtCBtX.

Theorem. If S is regular, Hwave(Hwave(X)) = Hwave(X) for all X in S.

Proof. Using the lemmas stated above and de Morgan's Laws, we have Hwave(Hwave(X)) =
= ∪r≥0 CBrCBr ∪s≥0 CBsCBsX =
= ∪r≥0 CBrC ∪s≥0 BrCBsCBsX =
= ∪r≥0 C ∩s≥0 BrCBrCBsCBsX ≤
≤ ∪r,s≥0 CBmax{r,s}CBmax{r,s}X = Hwave(X).
The converse inclusion is given by property 2 of Hwave.

We define convexity in the customary manner: X is convex if X = Hwave(X). Remember that this definition of convexity does not exactly coincide with the classical one for S = Rn.

Theorem. The following properties hold (proof trivial):
  1. Ø and S are convex.
  2. The intersection of a family of convex sets of S is also convex.
  3. (S is regular) Hwave(X) is convex for any arbitrary X in S.

Tuesday, February 12, 2008

Swo extensions and C++

The specification of the C++ binary search algorithms uses the notion of partition to describe the acceptable algorithm arguments. For instance, the lower_bound algorithm:

template<class ForwardIterator, class T, class Compare>
ForwardIterator lower_bound(
ForwardIterator first, ForwardIterator last,
const T& value, Compare comp);

requires that the elements e of [first,last) be partitioned with respect to the expression comp(e,value), while equal_range:

template<class ForwardIterator, class T, class Compare>
pair<ForwardIterator, ForwardIterator> equal_range(
ForwardIterator first, ForwardIterator last,
const T& value, Compare comp);

requires that elements be partitioned by comp(e,value) and by !comp(value,e), and moreover comp(e,value) must imply !comp(value,e), i.e. the first partition must be a subset of the second.

Swo extensions and this kind of partition are closely related:

Definition. Let (A,<A) be a swo and X a subset of A. A subset P of X is said to be a partition of X if for every x in P, P includes every y in X such that not(x <A y).

Theorem. Let (<AB,<BA) be a swo extension of <A to B, X a subset of A and b an element of B. The sets

P = {x in X : x <AB b},
P' = {x in X : not(b <BA x)}

are partitions of X. Moreover, P is a subset of P'.

Proof. Let y be such that not(x <A y). By property 2 of the definition of a swo extension, if x <AB b and not(x <A y) then y <AB b. Similarly, using property 3 of the definition of a swo extension we have that not(b <BA x) and not(x <A y) implies not(b <BA y). The last claim is derived from property 1 of swo extensions.

Theorem. Let (A,<A) be a swo, B a set disjoint from A, and P, P' : B → P(A) such that P(b) is included in P'(b) and both are partitions of A for every b. We define the relations <AB on A×B and <BA on B×A as follows:

a <AB b iff a is in P(b),
b <BA a iff a is not in P'(b).

(<AB,<BA) is then a swo extension of <A to B.

Proof. Property 1 of swo extensions is derived from the fact that P(b) is included in P'(b) for every b in B. If not(a <AB b), that is, a is not in P(b), and not(a' <A a), the fact that P is a partition implies that a' is not in P(b), that is, not(a' <AB b). Finally, if not(b <BA a), that is, a is in P'(b), and not(a <A a'), the fact that P' is a partition implies that a' is in P'(b), that is, not(b <BA a').

In my view, swo extensions allow for a richer mathematical treatment of the subject than formulations based on partitions, but the latter are undeniably more comprehensible to the layman, and so it is no wonder that partitions have been chosen to describe binary search algorithms in C++.

The draft of the upcoming C++0x standard gives the following definition of a partition:

A sequence [start, finish) is partitioned with respect to an expression f(e) if there exists an integer n such that for all 0 <= i < distance(start, finish), f(*(start + i)) is true if and only if i < n.

At first sight, our rendition of the concept of partition seems a univocal translation of the formulation given by the standard draft, but in reality the standard draft formulation is a little more lenient about the treatment of equivalent elements, i.e. elements a, b such that neither is less than the other. So, there are C++-partitions that do not qualify as partitions according to our definition.

Friday, February 8, 2008

Bouncing arithmetic

A common visualization of modular arithmetic consists of considering a variable x that monotonically increases mod n: when x is about to reach n, it wraps around to the starting value 0. For instance, the values of x mod 5 are:

0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, ...
What about a variation where x does not wrap around to 0 but instead bounces and begins decreasing towards 0?

0, 1, 2, 3, 4, 3, 2, 1, 0, 1, 2, 3, 4, 3, ...
The period of this sequence is 2(n−1). In order to model this "bouncing arithmetic", we need to distinguish between increasing values and decreasing values; we do so by subscripting a + or − symbol, respectively:

0, 1+, 2+, 3+, 4, 3−, 2−, 1−, 0, 1+, 2+, ...
Note that 0+ = 0− and (n−1)+ = (n−1)−. It is entirely trivial to lay out the foundations of bouncing arithmetic by mapping them to a suitable representation in modular arithmetic:

(a = b bounce n) := (a = b mod 2(n−1)),
a+ := a bounce n,
a− := −a bounce n.

The last two statements introduce the subscripting +/− notation. It is easy to verify that these definitions are accurate: for instance, 3− bounce 5 is (the equivalence class of) −3 bounce 5, and −3 = 5 mod 8, just as it should be (3− goes right after 4). Having learned the nice equivalence between −a and a−, we can dispense with the subscripting notation altogether and simply use +a and −a.

Monday, February 4, 2008

The State as a superrational nanny

Consider the following N-player symmetric game where each player is given the option to cooperate or defect: we define ci as 1 if the i-th player cooperates, and 0 if she defects, and her payoff is given by:

ui = −ci + √(∑jcj).

Let M be the number of players who cooperate; the payoff of a cooperative player is then √M − 1, and that of a defecting player is √M. Overall utility is

∑i ui = M(√M − 1) + (N − M)√M = N√M − M,

which is maximized when N = M, i.e. if everybody cooperates. On the other hand, each player is better off defecting, as the individual marginal payoff for contributing is never greater than 1 (this follows from the inequality √(M+1) − √M ≤ 1); but everybody defecting will yield an individual and global payoff of zero, which is the lowest possible outcome, both individually and overall.

This game is an example of the Free Rider Problem. As I see it, it can also be regarded as a model for the economic role of the State as a tax-funded producer of public goods: public services (infrastructure, health care, etc.) have a utility which conforms to the Law of Diminishing Returns, but this inefficiency (as compared with the situation where every taxpayer keeps their money) is compensated by the fact that the service is enjoyed by every citizen in a non-exclusive manner.

If players in this game were superrational they would opt for cooperation, and individual and global payoff would indeed be much higher than predicted by classical Game Theory. In a sense, the State (and its tax-enforcing machinery) can be seen then as an agent coercing society into superrationality, which is a Game Theory formulation of the classical social contract. The important question remains of how primitive societies can evolve to self-impose such a social contract, given that superrationality can hardly be attributed to most individuals (hence the existence of law-enforcing mechanisms, and even Law itself). A rather disturbing explanation is that a society can only be forced into entering the social contract by a benevolent dictator; once the social contract has been set in place, the system itself caters for its own preservation, so that formal dictatorship is no longer required.