Friday, October 31, 2008

C preprocessor tail recursion

The C preprocessor gives the false impression of being functional in nature, but its capabilities are more limited than one might expect; for instance, macro reentrancy does not work: consider the following attempt at recursively defining the factorial function using Boost.Preprocessor:

#define FACTORIAL(n) \
BOOST_PP_IF( \
  n, \
  BOOST_PP_MUL(n,FACTORIAL(BOOST_PP_DEC(n))), \
  1)
The expression FACTORIAL(5) does not expand to 120, however, but to
BOOST_PP_TUPLE_ELEM_3_0 BOOST_PP_IIF_BOOST_PP_BOOL_FACTORIAL(4)(
BOOST_PP_WHILE_2, (0, 5, FACTORIAL(4)) BOOST_PP_TUPLE_EAT_3)(BOO
ST_PP_MUL_P, BOOST_PP_MUL_O, (5, 5, BOOST_PP_DEC_FACTORIAL(4)))

The rules of the C preprocessor mandate that, during the expansion of the function-like macro FACTORIAL, internal occurrences of this same token are not further expanded, thus ruling out the very core of recursion. A simpler example illustrates this:

#define A B
#define B A

A
The last line of the snippet above does not enter into infinite recursion, but simply causes the expansion chain A → B → A to be applied, stopping there because A is met inside the expansion of A itself.

Despite these serious limitations, Boost.Preprocessor is able to provide constructs for bounded iteration implemented with very sophisticated techniques. Let us see how to use BOOST_PP_WHILE to simulate tail recursion, a special form of recursion in which the recursive call happens exactly at the end of the recursive function body. The usual formulation of the factorial function is not tail recursive:

int factorial(int n)
{
  if(n>0)return n*factorial(n-1);
  else return 1;
}
But it can be rewritten to be so as follows:
int factorial(int n,int res=1)
{
  if(n>0)return factorial(n-1,n*res);
  else return res;
}

The idea is that the recursive function passes all the relevant contextual information as part of the recursive call, thus delegating the work that would otherwise have to be done after the recursion returns; in this way, recursing (except when terminating the recursion) is exactly the last thing the function does. Let us now apply the following transformation f → f' to a tail recursive function f:

  • If f stops and returns res, f' returns the sequence (stop,res).
  • If f recursively calls f(args), f' returns the sequence (iterate,args).

That is, f' does not recurse but instead returns the computation state if recursion is to occur, or the final return value otherwise. The following pseudocode illustrates the transformation process for factorial:

factorial_body(int n,int res=1)
{
  if(n>0)return (iterate,n-1,n*res);
  else return (stop,res);
}
f' is no longer recursive, but it can be used to compute f as follows:
recurse(body,args)
{
  state=(iterate,args);
  while(state[0]!=stop){
    state=body(state.args);
  }
  return state.res;
}
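Before jumping to the preprocessor, the trampoline above can also be rendered as ordinary runtime C++; this is just an illustrative sketch (the names state, factorial_body and recurse are mine, not part of any library), but it shows how a plain loop takes the place of the recursive call:

// Runtime C++ sketch of the recurse() trampoline above.
#include <iostream>

struct state
{
  bool stop; // true when the computation has finished
  int  n;    // pending argument
  int  res;  // accumulated result
};

state factorial_body(int n, int res)
{
  if(n > 0) return state{false, n - 1, n * res}; // (iterate, n-1, n*res)
  else      return state{true, 0, res};          // (stop, res)
}

int recurse(state (*body)(int, int), int n, int res = 1)
{
  state s{false, n, res};
  while(!s.stop) s = body(s.n, s.res); // iterate until the body signals stop
  return s.res;
}

int main()
{
  std::cout << recurse(factorial_body, 5) << "\n"; // prints 120
}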
The interesting fact about this process is that recursion (and so reentrancy) has been eliminated in favor of iteration: f' is called repeatedly until it signals termination. We can very easily implement this idea using Boost.Preprocessor:
#define RECURSE(body,args) \
RECURSE_RES(BOOST_PP_WHILE(RECURSE_PRED,RECURSE_OP,(body)ITERATE args))

#define RECURSE_PRED(d,state) \
BOOST_PP_SEQ_ELEM(1,state)

#define RECURSE_OP(d,state) \
(RECURSE_BODY(state))RECURSE_BODY(state)(BOOST_PP_SEQ_REST_N(2,state))

#define RECURSE_BODY(state) \
BOOST_PP_SEQ_ELEM(0,state)

#define RECURSE_RES(state) \
BOOST_PP_SEQ_ELEM(2,state)

#define STOP (0)

#define ITERATE (1)
which allows us to implement our factorial macro in a reasonably straightforward manner:
#define FACTORIAL(n) RECURSE(FACTORIAL_BODY,(n)(1))

#define FACTORIAL_BODY(args) \
FACTORIAL_BODY_IMPL( \
  BOOST_PP_SEQ_ELEM(0,args),BOOST_PP_SEQ_ELEM(1,args))

#define FACTORIAL_BODY_IMPL(n,res) \
BOOST_PP_IF( \
  n, \
  ITERATE(BOOST_PP_DEC(n))(BOOST_PP_MUL(n,res)), \
  STOP(res))
Arguments are packed using Boost.Preprocessor sequences. A complete implementation program is provided. Unfortunately, the process seems to be very onerous for common implementations of the C preprocessor, as internal limits are generally reached with arguments greater than 5. This problem could deserve further investigation in a later entry.
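For reference, the skeleton of such a program might look as follows; the list of Boost.Preprocessor headers is my reconstruction of what the macros above need, not something taken from this entry:

/* Headers required by RECURSE/FACTORIAL as defined above. */
#include <boost/preprocessor/arithmetic/dec.hpp>
#include <boost/preprocessor/arithmetic/mul.hpp>
#include <boost/preprocessor/control/if.hpp>
#include <boost/preprocessor/control/while.hpp>
#include <boost/preprocessor/seq/elem.hpp>
#include <boost/preprocessor/seq/rest_n.hpp>

/* ... RECURSE, STOP, ITERATE and FACTORIAL definitions as listed above ... */

/* FACTORIAL(5) is expanded by the preprocessor to the literal 120, so the
   check below is resolved at compile time (provided the preprocessor's
   internal limits are not exceeded). */
static_assert(FACTORIAL(5) == 120, "preprocessor factorial");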

Monday, October 27, 2008

Law-abiding miracles

Newton believed that, if left to the action of gravitational forces alone, the Solar System must be unstable, and only the continuous action of God, counteracting the small perturbations here and there, keeps the system together and wonderfully regular:

The six primary planets are revolved about the sun in circles concentric with the sun, and with motions directed towards the same parts, and almost in the same plane. Ten moons are revolved about the earth, Jupiter and Saturn, in circles concentric with them, with the same direction of motion, and nearly in the planes of the orbits of those planets; but it is not to be conceived that mere mechanical causes could give birth to so many regular motions[...] This most beautiful system of the sun, planets, and comets could only proceed from the counsel and dominion of an intelligent and powerful Being.

This curious picture of God as a careful supervisor of heavenly motions would only be compatible with the very physical laws that Newton crafted if God Himself possessed a physical body with mass and energy (and large quantities of them, for that matter): classical (and relativistic) mechanics is deterministic and does not allow for any degree of freedom that a deity could take advantage of to direct the evolution of a physical system one way or another. In this sense, the action of God as a stabilizer of the Solar System is a sustained miracle, a physical phenomenon incompatible with the laws of physics. This very definition of miracle is logically contradictory to begin with, at least under the premise of a deterministic universe.

On the other hand, if we consider non-deterministic physical theories, there is room for a concept that we could aptly name "law-abiding miracle": for instance, quantum tunneling allows for the occurrence of phenomena that we generally deem impossible, such as wall crossing and other ghostly effects, although the probability of their occurrence at a macroscopic level is exceedingly small. So small, in fact, that it is not reasonable to expect a single macroscopic tunneling event to take place during the entire lifetime of the Universe, let alone in the presence of humans. But unlikely is not impossible: a quantum god can in principle miraculously interfere with the business of the physical world without violating the rules that govern it.

There is, nonetheless, some degree of circularity in this notion of law-abiding miracles within a probabilistic physical theory. If the frequency of such miracles is too high (as would be the case in the scenario envisioned by Newton, with a solar system constantly tweaked to keep it stable), these divine interventions would show up in the statistical calculations made by physicists when experimentally testing the theory, rendering the results highly inconsistent with the theoretical predictions; and this could in the end falsify the theory. A deity's insistence on meddling in the normal course of events would prevent humans from acquiring knowledge of the laws of Nature.

Thursday, October 23, 2008

Circles, ellipses, assistants and employees

The well-known Circle-ellipse Problem states that, in the world of OO, a circle is not an ellipse because the two entities do not behave the same: for instance, one can arbitrarily stretch an ellipse, but stretching a circle would break the essential property of this shape, namely that all its points are at the same distance from the center. On the other hand, other subtype relationships perfectly match the real-life scenarios they model: for instance, an assistant is certainly a kind of employee, and so an associated Assistant type can be modeled as a subtype of Employee without any problem whatsoever. What is the difference between these two models that makes the latter succeed where the former fails?

We introduce some notation to talk about a few concepts around the notion of "object" in OO. An object x has a definite lifetime T(x) = [t0,t1] during which the internal state of the object might (or might not) change according to the operations performed on it. We denote by S(x,t) the state of x at time t. Assume that we have a type Ellipse modelling the mathematical concept of ellipse, which we can identify with the set E of all ellipses in R²: we intuitively expect from Ellipse that the states of objects of this type can be univocally mapped to elements of E:

for all x of type Ellipse, t in T(x), there is an ellipse e in E such that e ~ S(x,t),

or, put more succinctly, we can say that

the states of Ellipse objects are ellipses (modulo isomorphism).

Similarly, the states of Circle objects are circles. A Circle cannot be a subtype of Ellipse because, according to the Liskov Substitution Principle, a Circle does not behave as an Ellipse: there are mutating operations on Ellipse (like the stretching example mentioned at the beginning) such that the final state of the modified object can never be a circle, so we cannot provide an equivalent operation for Circle.
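As an illustration (the class names and the stretch operation below are mine, not taken from this entry), here is how the conflict shows up in code: no override of a stretching operation can honor both Ellipse's contract and the circle invariant.

// Illustrative sketch: a mutating operation on Ellipse whose contract
// (stretch one semi-axis, leave the other untouched) cannot be honored by
// any Circle subtype without breaking the circle invariant a() == b().
class Ellipse
{
public:
  Ellipse(double a, double b): a_(a), b_(b) {}
  virtual ~Ellipse() = default;

  // postcondition: a() == new_a and b() is unchanged
  virtual void stretch(double new_a) { a_ = new_a; }

  double a() const { return a_; }
  double b() const { return b_; }

protected:
  double a_, b_;
};

class Circle: public Ellipse
{
public:
  explicit Circle(double r): Ellipse(r, r) {}

  // Whatever we do here, we either break Ellipse's postcondition or the
  // circle invariant: Circle is not a behavioral subtype of a mutable Ellipse.
  void stretch(double new_a) override { a_ = b_ = new_a; }
};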

Now, suppose we have types Employee and Assistant: here the subtyping relationship works because all the state changes of an employee (salary raise, relocation, etc.) are compatible with the state changes of an assistant. And this points right to the crux of the matter: the real-life "employee" concept refers to a mutable entity whose state can change as part of the interactions of the employee with the rest of the company; Employee models this concept by making the object state changes reflect the changes of the modelled employee:

the states of Employee objects are employee states.

Compare this with

the states of Ellipse objects are ellipses.

Ellipse states are ellipses, not ellipse states (ellipses do not have any mutable state). This is why the subtyping relationship Ellipse/Circle does not work while Employee/Assistant does. In general, for a relationship T < T' between concepts to be representable by a subtyping relationship between the corresponding types, the states of the objects of those types must model states of the T and T' entities, respectively.

Monday, October 20, 2008

Multicolumn querying

Suppose we have a database table equipped with a multicolumn (also called concatenated or complex) index like this:

CREATE TABLE user (last_name varchar, first_name varchar)
CREATE INDEX user_index ON user (last_name, first_name)
Most platforms take advantage of user_index in queries involving both user attributes or just last_name:
SELECT * from user WHERE first_name="Robert" AND last_name="Grant";
SELECT * from user WHERE last_name="Smith";
while, on the other hand, queries like the following:
SELECT * from user WHERE first_name="Mary";

trigger a full table scan or use the index in a suboptimal fashion. The simple reason behind this is that database indexes are usually implemented with B-trees or similar structures that sort the data by the values of the indexed column: for a multicolumn index on (c1,...,cn), sorting is performed according to the lexicographical order associated to the n columns, with ci taking precedence over cj if i < j. So, in our example the index sorts the rows by last_name and, within clusters with equal last_name, data is further sorted by first_name. This explains why querying on last_name benefits from the index (rows are sorted by the first attribute to begin with) while querying on first_name does not. Database designers learn to carefully consider the column specification order when creating multicolumn indexes.

So, in general, an index on (c1,...,cn) serves to accelerate queries of the form

c1 = v1 AND c2 = v2 AND ··· AND ci = vi

for all i ≤ n. The question arises: how many multicolumn indexes, and on which column orderings, must be created so that all possible attribute combinations are covered by some index? For instance, if we have three attributes:

CREATE TABLE user (last_name varchar, first_name varchar, age integer)

three indexes suffice:

CREATE INDEX user_index1 ON user (last_name, first_name, age)
CREATE INDEX user_index2 ON user (first_name, last_name, age)
CREATE INDEX user_index3 ON user (age, last_name, first_name)

(You might find it entertaining to check manually that all 7 different attribute combinations are covered.) It turns out that for n attributes, which generate 2^n − 1 combinations, the minimum number of indexes needed is

C(n,floor(n/2)) = n!/(ceil(n/2)!·floor(n/2)!),

where C(a,b) denotes the binomial coefficient a choose b. The values of C(n,floor(n/2)) for n = 1,2,... are: 1, 2, 3, 6, 10, 20, 35, 70, 126,... Using Stirling's approximation, the sequence can be proved to be O(2^n/√n). The mathematical formulation of this problem has been studied in a previous entry. Also, an algorithm for generating the index set is provided.
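The manual check suggested above can also be automated; the following sketch (illustrative code, not part of the original entry) marks every column prefix of every index as covered and then enumerates all attribute combinations:

// Checks that a family of column orderings covers every nonempty attribute
// combination: an index (c1,...,cn) accelerates a query on a set of columns
// iff that set is exactly {c1,...,ci} for some prefix length i.
#include <iostream>
#include <set>
#include <string>
#include <vector>

using columns = std::set<std::string>;

int main()
{
  std::vector<std::vector<std::string>> indexes = {
    {"last_name", "first_name", "age"},
    {"first_name", "last_name", "age"},
    {"age", "last_name", "first_name"}
  };
  std::vector<std::string> attrs = {"last_name", "first_name", "age"};

  // collect all prefix sets served by some index
  std::set<columns> covered;
  for(const auto& idx: indexes)
    for(std::size_t i = 1; i <= idx.size(); ++i)
      covered.insert(columns(idx.begin(), idx.begin() + i));

  // enumerate the 2^n - 1 nonempty attribute combinations
  std::size_t missing = 0;
  for(unsigned mask = 1; mask < (1u << attrs.size()); ++mask){
    columns c;
    for(std::size_t j = 0; j < attrs.size(); ++j)
      if(mask & (1u << j)) c.insert(attrs[j]);
    if(!covered.count(c)) ++missing;
  }
  std::cout << "uncovered combinations: " << missing << "\n"; // prints 0
}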

Thursday, October 16, 2008

On necessity in McGinn

Colin McGinn challenges in his book Logical Properties the standard treatment of modality and proposes some rather provocative and unorthodox ideas on the nature of necessity and associated modal notions. There is much to argue about McGinn's positions, but I would like to focus on three particular issues:

The main objection of McGinn to the possible world semantics is that this approach reduces modal notions to quantifications over universes of possible worlds, and whether a world is possible or not is already dependent on the modal concept of possibility, thus engendering a circular reasoning. As I see it, the flaw in this argument stems from the wrong assumption that characterizing a world W as possible is a modal business; although McGinn does not put it explicitly, he seems to think that the statement "W is a possible world" is equivalent to something like "possibly W exists". But actually there is nothing modal about the concept of "possible world": a possible world is just a consistent (i.e. logically non-contradictory) state of affairs, of which the actual world we live in is just an example. Rephrase the whole possible worlds semantics formulation replacing "possible world" with "consistent world" and all the apparent circularities vanish away.

Later on, McGinn proposes a copula modifier theory to explain modality: basically, this theory contends that modal notions qualify the way that objects hold properties. So, "Socrates must be a man" means that Socrates holds the property of being a man in the mode of necessity. McGinn immediately realizes that ascribing modality to the copula of statements cannot cover uses of modality qualifying entire statements, as in "it is impossible that 2 + 2 = 5" (which, incidentally, is how modality operators work in formal modal logic). He solves the problem by postulating that in these cases the missing copula to qualify is the one ascribing the truth property to the proposition at hand: so "it is impossible that 2 + 2 = 5" can be rephrased as "the proposition '2 + 2 = 5' cannot be true", which, according to the copula modifier theory, means that "2 + 2 = 5" holds the property of being true in the mode of impossibility. In the process, McGinn has forced us to accept propositions as first-class entities to talk about, an ontological extension that some (notably Quine) reject committing to. Even admitting this unexpected guest, it is not clear to me how this theory sheds any light on the nature of modality: one can just as well go the reverse way and reformulate predicate-qualifying modal sentences as sentence-level modal utterances: for instance, "Socrates must be a man" is just a way of saying "necessarily, Socrates is a man", a process that does not involve the introduction of propositions and the "truth predicate". Why McGinn prefers one rewriting rule to its converse is unknown to me; furthermore, McGinn's formulation poses some perplexities when iterated modality is considered, as he readily acknowledges.

Finally, having stated that modality affects the way objects hold properties, McGinn declines to analyze further what the exact nature of this relationship between objects and properties is. He gives in passing a Tarski-style semantic formulation of modality along the lines of

x satisfies 'is necessarily F' iff x necessarily satisfies 'F',

only to admit that this does not add anything to our understanding of modality. Modal truth is left as a sort of primitive, epiphenomenal concept for which even a special entity status is set up within the confines of McGinn's exuberant ontology. It is difficult to see how this situation represents an improvement over the standard modal theory.

Sunday, October 12, 2008

Points inside a polygon and the Residue Theorem

A usual way to determine whether a 2D point p lies inside a simple polygon P consists in tracing a ray from p to infinity and computing how many times this ray intersects P: p is inside P iff the number of intersections is odd. We investigate a different technique for solving the problem based on complex analysis, only to finally see that this new method is really equivalent to the classical ray algorithm.

We consider p and P to be represented in the complex plane and assume, without loss of generality, that p = 0. The Residue Theorem tells us that

∮P (1/z) dz

is null if the point 0 (the only pole of 1/z) is outside P, and

2πi Res_{z=0}(1/z) = 2πi

if 0 lies inside P. So we need only evaluate the integral ∮P (1/z) dz; as P is a closed loop of segments traversed counterclockwise like this

P = (p1,p2), (p2,p3), ... , (pN−1,pN), (pN,p1),

and ln z is an antiderivative of 1/z, we have that

∮P (1/z) dz = Σi ∫(pi,pi+1) (1/z) dz = Σi (ln pi+1 − ln pi).

But now, this sum seems to be unconditionally null:

Σi (ln pi+1 − ln pi) = ln p2 − ln p1 + ln p3 − ln p2 + ln p4 − ln p3 + ... + ln p1 − ln pN = 0.

Each term is cancelled out by some other with different sign, so the total sum is, or seems to be, zero! Obviously we have made some mistake during the process, and in fact the wrong assumption is this: the equality

∫(pi,pi+1) (1/z) dz = ln pi+1 − ln pi

only holds if the derivative of ln z is 1/z along the whole segment (pi,pi+1) (more precisely, in a region containing the segment): but ln z, or, strictly speaking, the principal branch of ln z, has a cut at the negative real axis, so the equality above fails to hold when (pi,pi+1) intersects this cut. When this is the case, we can evaluate the integral by using a different branch of the complex logarithm, chosen so that (pi,pi+1) does not intersect the new cut. It is not hard to see that doing so yields

∫(pi,pi+1) (1/z) dz = ln pi+1 − ln pi + σ·2πi,

where ln denotes again the principal branch of the complex logarithm and σ is +1 or −1 depending on whether (pi,pi+1) crosses the negative real axis southwards or northwards. Having the situation with the negative real axis covered, we can easily deduce that

∮P (1/z) dz = (s − n)·2πi,

where s is the number of segments of P crossing the negative real axis southwards, and n corresponds to those crossing northwards. And, in the end, this is but a simple reformulation of the old ray-based algorithm.
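The (s − n) crossing count translates directly into code; here is a small illustrative sketch (the names and structure are mine, not from this entry) that translates the polygon so the query point sits at the origin and counts signed crossings of the negative real axis:

// Point-in-polygon via signed crossings of the negative real axis:
// southwards crossings count +1, northwards crossings count -1; the point
// is inside iff the total is nonzero.
#include <complex>
#include <iostream>
#include <vector>

using point = std::complex<double>;

bool inside(const point& p, const std::vector<point>& poly)
{
  int winding = 0; // s - n
  for(std::size_t i = 0; i < poly.size(); ++i){
    point a = poly[i] - p, b = poly[(i + 1) % poly.size()] - p;
    // does segment (a,b) cross the real axis at all?
    if((a.imag() > 0) != (b.imag() > 0)){
      // real part of the crossing point with the real axis
      double x = a.real() + (b.real() - a.real()) * a.imag() / (a.imag() - b.imag());
      if(x < 0) winding += (a.imag() > 0) ? 1 : -1; // southwards: +1, northwards: -1
    }
  }
  return winding != 0;
}

int main()
{
  std::vector<point> square = {{1,1}, {-1,1}, {-1,-1}, {1,-1}}; // counterclockwise
  std::cout << inside({0.5, 0.5}, square) << " "
            << inside({2, 0}, square) << "\n"; // prints 1 0
}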

Wednesday, October 8, 2008

Generating permutation covers: part II

In a prior entry, we reduced the problem of generating a minimal permutation cover of a set S of N elements to that of constructing a minimal tuple cover of S0 U ... U SM, where Si is the set of all subsets of S with cardinality i, M = (N − 1)/2 and N is odd. We do this construction recursively on m = 0,...,M.

Suppose then that we have a minimal tuple cover T of S0 U ... U Sm, i.e. a minimal set of m-tuples of different elements jointly covering all subsets of S with cardinality ≤ m, and we want to extend T to a minimal set T' of (m+1)-tuples jointly covering S0 U ... U Sm U Sm+1. We state without proof the following

Lemma. If m ≤ N/2, a minimal tuple cover of S0 U ... U Sm has cardinality C(N,m).

So, |T| = C(N,m) and we want to extend T to a set T' by assigning to each tuple τ of T one or more tuples of the form τ·a, with a in S − range(τ), in such a way that all subsets of Sm+1 are covered and the final number of elements of the cover |T'| is C(N,m+1).

When extending a tuple τ of T we will disregard the order of its elements, so basically we are identifying τ with range(τ); this reduction still yields C(N,m) different elements, since every element of Sm must be covered by some tuple of T. So the extension mapping T → T' relates elements of Sm to elements of Sm+1, and in fact can be regarded as a subset of the graph induced by inclusion between elements of these two sets.

It is easy to see that each element of Sm is the source of N − m arrows, whereas m + 1 arrows arrive at each element of Sm+1. Our job is then to select C(N,m+1) arrows from this inclusion graph so that all the elements of the source and destination sets are met by at least one arrow. The selection process has to maintain some balance so that no source or destination element is left unattended: for instance, if we selected for each element of Sm+1 an arbitrary arrow arriving at it, some elements of Sm might not be visited by any of the selected arrows. The following is a balanced selection criterion: given a fixed injection f : Sm → Sm+1 and a function g : Sm+1 → Sm, both compatible with the inclusion relationship between Sm and Sm+1 (that is, X is a subset of f(X) and Y a superset of g(Y)), we select an arrow X → Y iff

Y = f(X) or (X = g(Y) and Y is not in f(Sm)).

It is obvious that this criterion does not leave any X or Y unvisited. The number of selected arrows coincides with the number of elements of Sm+1, which is C(N,m+1) as required. In the following we suppose, without loss of generality, that S is the set {0,...,N − 1}. Constructively defining an inclusion-compatible injection from Sm to Sm+1 is not a trivial task, but fortunately for us a paper by Pudlák, Turzík and Poljak provides the definition of such an injection f along with an algorithm χ : Sm+1 → {true,false} that checks whether a given Y belongs to f(Sm). We adopt the following definition for g:

g(Y) := Y − max(Y),

which leads to this algorithm for generating Τ' from Τ:

extend(N,Τ)
Τ' ← Ø
for every τ in Τ
····X ← range(τ)
····a ← f(X) − X {the one element added to X by f}
····Τ' ← Τ' U {τ·a}
····for i = max(X) + 1 to N − 1
········Y ← X U {i}
········if not χ(Y) then
············Τ' ← Τ' U {τ·i}
········end if
····end for
end for

Note that the double loop over (X,i) is designed in such a way that it only visits the X → Y arrows where X = g(Y), which is maximally efficient and saves us the need to explicitly compute g. The complete Τ0 = Ø → Τ1 → ··· → ΤM process can be inlined to avoid generating the intermediate Τi covers.

tuple-cover(m,N,τ,Τ) {initial call: tuple-cover(0,N,Ø,Τ) with Τ empty}
if m = (N − 1)/2 then
····Τ ← Τ U {τ}
else
····X ← range(τ)
····a ← f(X) − X
····tuple-cover(m + 1,N,τ·a,Τ)
····for i = max(X) + 1 to N − 1
········Y ← X U {i}
········if not χ(Y) then
············tuple-cover(m + 1,N,τ·i,Τ)
········end if
····end for
end if
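For reference, a C++ rendering of tuple-cover could look as follows; the injection f and the test χ are left as parameters, since they come from the Pudlák-Turzík-Poljak construction and are not reproduced here, so this is only a sketch, not the complete implementation referenced later in this entry:

// Skeleton of tuple-cover: f and chi are assumed to be supplied externally.
#include <functional>
#include <set>
#include <vector>

using tuple_t = std::vector<int>;  // tuple of distinct elements of {0,...,N-1}
using set_t   = std::set<int>;     // range of a tuple
using cover_t = std::vector<tuple_t>;

void tuple_cover(
  int m, int N, const tuple_t& tau, cover_t& T,
  const std::function<set_t(const set_t&)>& f,   // inclusion-compatible injection
  const std::function<bool(const set_t&)>& chi)  // membership test for f(S_m)
{
  if(m == (N - 1) / 2){ T.push_back(tau); return; }

  set_t X(tau.begin(), tau.end());
  set_t fX = f(X);
  int a = -1;                          // the one element added to X by f
  for(int x: fX) if(!X.count(x)){ a = x; break; }

  tuple_t tau_a = tau; tau_a.push_back(a);
  tuple_cover(m + 1, N, tau_a, T, f, chi);

  int maxX = X.empty() ? -1 : *X.rbegin();
  for(int i = maxX + 1; i < N; ++i){
    set_t Y = X; Y.insert(i);
    if(!chi(Y)){
      tuple_t tau_i = tau; tau_i.push_back(i);
      tuple_cover(m + 1, N, tau_i, T, f, chi);
    }
  }
}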

In order to leverage the tuple cover algorithm to construct a minimal permutation cover, the only missing piece is finding a bijection d : SM → SM such that X ∩ d(X) = Ø for all X in SM, as we already saw. The aforementioned paper by Pudlák et al. also provides such a function (which, in fact, is used in the construction of f). Having all the necessary components, the following is the full algorithm for constructing a minimal permutation cover on S = {0,...,N − 1}.

cover(N)
if N is even then
····N' ← N − 1
else
····N' ← N
end if
Σ ← Ø
Τ ← Ø
tuple-cover(0,N',Ø,Τ)
for every τ in Τ
····find τ' in Τ with range(τ') = d(range(τ))
····a ← S − range(τ) − range(τ')
····σ ← τ·a·reverse(τ')
····if N is even then
········Σ ← Σ U {N'·σ}
········Σ ← Σ U {σ·N'}
····else
········Σ ← Σ U {σ}
····end if
end for

A C++ implementation of the algorithm is available (Boost used). The following are the different covers generated for |S| = 1,...,8.

|S|  permutation cover
1    (a)
2    (ab), (ba)
3    (acb), (bac), (cba)
4    (acbd),(dacb),(bacd),(dbac),(cbad),(dcba)
5    (acedb),(baedc),(bdaec),(bedca),(cbaed),(cdbae),(cebad),(daceb),(decab),(eadbc)
6    (acedbf),(facedb),(baedcf),(fbaedc),(bdaecf),(fbdaec),(bedcaf),(fbedca),(cbaedf),(fcbaed),
     (cdbaef),(fcdbae),(cebadf),(fcebad),(dacebf),(fdaceb),(decabf),(fdecab),(eadbcf),(feadbc)
7    (acegfdb),(baegfdc),(bdagfec),(bdfagec),(bdgfeca),(bedagfc),(befdcag),
     (begdcaf),(bfaegdc),(bfgecad),(bgafced),(cbagfed),(cdbagfe),(cdfbage),
     (cdgbafe),(cebagfd),(cefbagd),(cegbafd),(cfbaged),(cfgeadb),(cgbfdae),
     (dacgfeb),(decbagf),(defcagb),(degcafb),(dfacgeb),(dfgceab),(dgafbec),
     (eadcgfb),(efadbgc),(efgdabc),(egadbfc),(facegdb),(fgaebdc),(gacfdeb)
8    (acegfdbh),(hacegfdb),(baegfdch),(hbaegfdc),(bdagfech),(hbdagfec),(bdfagech),
     (hbdfagec),(bdgfecah),(hbdgfeca),(bedagfch),(hbedagfc),(befdcagh),(hbefdcag),
     (begdcafh),(hbegdcaf),(bfaegdch),(hbfaegdc),(bfgecadh),(hbfgecad),(bgafcedh),
     (hbgafced),(cbagfedh),(hcbagfed),(cdbagfeh),(hcdbagfe),(cdfbageh),(hcdfbage),
     (cdgbafeh),(hcdgbafe),(cebagfdh),(hcebagfd),(cefbagdh),(hcefbagd),(cegbafdh),
     (hcegbafd),(cfbagedh),(hcfbaged),(cfgeadbh),(hcfgeadb),(cgbfdaeh),(hcgbfdae),
     (dacgfebh),(hdacgfeb),(decbagfh),(hdecbagf),(defcagbh),(hdefcagb),(degcafbh),
     (hdegcafb),(dfacgebh),(hdfacgeb),(dfgceabh),(hdfgceab),(dgafbech),(hdgafbec),
     (eadcgfbh),(headcgfb),(efadbgch),(hefadbgc),(efgdabch),(hefgdabc),(egadbfch),
     (hegadbfc),(facegdbh),(hfacegdb),(fgaebdch),(hfgaebdc),(gacfdebh),(hgacfdeb)

In a later entry we will see a practical application of permutation covers in the context of database querying.

Saturday, October 4, 2008

User attention sharing

CPU time is a precious resource that must be shared among the processes concurrently running on a machine at any given time. Some operating systems of the past implemented this sharing using a technique called cooperative (or collaborative) multitasking, based on processes explicitly relinquishing CPU control to the OS so that control can be passed on to other tasks. This can be very inefficient, as the sharing cannot be enforced by the operating system but depends on the good judgement of third party application programmers; furthermore, a faulty process that fails to yield the CPU, or simply hangs, can bring the whole system down. So, modern OSs implement preemptive multitasking, in which the system scheduler can give or take CPU control without any special help from the processes. Usual implementations of preemptive multitasking group processes into two categories: IO-bound processes, which spend most of their time waiting for user input or dealing with secondary storage, and CPU-bound processes, which take up as much computing power as they are given. Some scheduling algorithms reward the least CPU-hungry processes by giving them higher priority to regain the CPU.

In a desktop machine, user attention is as much of a precious shared resource as the CPU is: concurrently living applications try to attract the user's focus by displaying visual effects or directly putting themselves in the forefront. In the early, more naïve days, applications could freely jump into the foreground of their own accord, which could quickly degrade into an all-out fight for the user's attention (especially if malware was involved), or "attention thrashing", and render the computer unusable. This situation is reminiscent of the first candid collaborative multitasking systems; incidentally, attention thrashing led some OS vendors to change the behavior of some of their APIs to try to restrict the applications' freedom to steal focus from each other, though workarounds abound. In the web browser arena, attention thrashing can be even more daunting, and most users end up entirely blocking pop-ups and other forms of focus stealing.

As I see it, user attention should then be explicitly taken care of by the operating system in the same way as CPU control is; in fact, we can learn from the history of multitasking and move from the current "collaborative" situation to system-controlled scenarios where applications request attention and are given it based on attention availability and some measure of application fairness. So, applications that request attention very rarely and do so for justified motives (where this last attribute can be estimated indirectly by, say, recording the amount of time spent by the user on the application when this pops up) are ranked higher and can be given more leeway to call on the user. On the other hand, noisy applications are rapidly ranked low and will not be able to steal focus unless the user explicitly clicks on them.