Tuesday, September 30, 2008

Mass exhalation

How much bodily mass do we give off through breathing? The chemical reaction for aerobic respiration is

C6H12O6 + 6O2 → 6CO2 + 6H2O

The amount of carbon dioxide we exhale is then

6 mol CO2 / mol glucose, i.e. 1.47 g CO2 / g glucose.

Processing of fatty acids yields a different amount of exhaled CO2. For instance, the oxidation of a saturated fatty acid is expressed as

CnH2nO2 + (3n/2 − 1) O2 → n CO2 + n H2O

yielding

44n/(14n + 32) g CO2 / g CnH2nO2.

This last expression also serves approximately for unsaturated fatty acids, since these are missing only a few lightweight hydrogen atoms with respect to the saturated acid with the same number of carbon atoms. Assuming most consumed fat is in the form of palmitate, we have a carbon dioxide yield of

2.75 g CO2 / g fat.

Carbohydrates produce 4 kcal per gram, whereas fatty acid processing yields 9 kcal per gram of fat. At rest, the human body draws about 30% of its energy from carbohydrates and 70% from fat, which results in a combined production of carbon dioxide of

0.324 g CO2 / kcal.
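
Explicitly, weighting each fuel's CO2 yield by its share of the energy budget:

0.3 · (1.47 / 4) + 0.7 · (2.75 / 9) = 0.110 + 0.214 ≈ 0.324 g CO2 / kcal.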

For a daily energy consumption rate of around 2,500 kcal, 810 grams of carbon dioxide are then expelled through breathing. There is an experimental study by the USDA on the subject that unfortunately I haven't been able to find, but some sources cite it as reporting a value between 445 and 450 liters of CO2 (881-891 g) per day, which is in good agreement with our calculations.

The value calculated is not the answer to our original question about exhaled bodily mass: much of the oxygen in the CO2 produced comes from the air rather than from catabolized substances, especially in the case of fat. We will assume that all the oxygen coming from the burned substance goes to CO2: under these conditions, the amount of bodily mass expelled through breathing is

0.933 g / g glucose,
0.875 g / g fat,

or, using the assumed ratios of carbohydrate and fat consumption,

0.138 g / kcal,

which yields 345 g for a typical daily energy consumption of 2,500 kcal.
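
As a cross-check, the whole computation can be condensed into a few lines of C++ (an illustrative program, not part of the original analysis):

#include <cstdio>

// Illustrative re-computation of the figures above (not part of the
// original analysis): CO2 and bodily-mass yields per gram of fuel, their
// per-kcal averages weighted by the assumed 30%/70% carbohydrate/fat
// energy split, and the resulting daily totals.
int main()
{
  const int    n=16;                                       // palmitate, C16H32O2
  const double co2_per_g_glucose=6*44.0/180.0;             // 1.47
  const double co2_per_g_fat=44.0*n/(14.0*n+32.0);         // 2.75
  const double mass_per_g_glucose=(6*12.0+6*16.0)/180.0;   // 0.933 (C and O)
  const double mass_per_g_fat=(12.0*n+32.0)/(14.0*n+32.0); // 0.875 (C and O)

  const double carb=0.3,fat=0.7;                           // energy shares at rest
  const double co2_per_kcal =carb*co2_per_g_glucose /4.0+fat*co2_per_g_fat /9.0;
  const double mass_per_kcal=carb*mass_per_g_glucose/4.0+fat*mass_per_g_fat/9.0;

  std::printf("%.3f g CO2/kcal, %.3f g bodily mass/kcal\n",
              co2_per_kcal,mass_per_kcal);                 // 0.324, 0.138
  std::printf("daily (2,500 kcal): %.0f g CO2, %.0f g bodily mass\n",
              2500*co2_per_kcal,2500*mass_per_kcal);       // 810, 345
}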

Friday, September 26, 2008

Generating permutation covers: part I

We investigate ways of efficiently generating minimal permutation covers for a set S of N elements. As we have already proved, such covers have size C(N,floor(N/2)). The following intermediate results reduce the problem to simpler terms.

Theorem. Assume that N is even and greater than zero. Let a be an arbitrary element of S and Σ a minimal permutation cover of S − {a}. The set Σ' defined as

Σ' = {aσ, σa : σ in Σ}

is a minimal permutation cover of S.

Proof. Σ' is a permutation cover: let X be a subset of S; if a is not in X, X is covered by some permutation σ of Σ and a fortiori by σa in Σ'; if a belongs to X, some σ in Σ covers X − {a} and then aσ in Σ' covers X. Σ' is minimal as |Σ'| = 2|Σ| = 2 C(N − 1, floor((N − 1)/2)) = 2 C(N − 1, N/2 − 1) = C(N,N/2) = C(N,floor(N/2)), where the equalities depend on basic properties of binomial coefficients and the fact that N is even.
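
In code, the construction is straightforward. The following sketch (illustrative; the names and the representation of permutations as strings of distinct characters are mine, not from the theorem) doubles a minimal cover of S − {a} into a minimal cover of S:

#include <cstddef>
#include <string>
#include <vector>

// Doubling construction sketch: cover is assumed to be a minimal
// permutation cover of S - {a}, each permutation a string of distinct
// characters; the result is a minimal permutation cover of S.
std::vector<std::string> extend_cover(
  const std::vector<std::string>& cover,char a)
{
  std::vector<std::string> result;
  result.reserve(2*cover.size());
  for(std::size_t i=0;i<cover.size();++i){
    result.push_back(a+cover[i]); // a-sigma covers the subsets containing a
    result.push_back(cover[i]+a); // sigma-a covers those not containing a
  }
  return result;
}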

This result reduces the problem to N odd. The Hasse diagram of the powerset of S when N is odd looks like this (the particular case depicted is N = 5):

[figure: Hasse diagram of the powerset of S for N = 5, layered by subset cardinality]

We name the layers of the diagram S0,...,SN. Si consists of the subsets of S with cardinality i. We have |Si| = C(N,i). A permutation covers exactly one element of each Si. Similarly, a tuple τ of n non-repeating elements covers exactly one element of each of S0,...,Sn. So, a minimal tuple cover of S0,...,Sn can be proved to have as many elements as the largest layer covered. The next result shows how to extend a minimal tuple cover of S0,...,S(N − 1)/2, which has cardinality C(N,(N − 1)/2) = C(N,floor(N/2)) and spreads over the bottom half of the powerset of S, to a minimal permutation cover of S.

Theorem. Assume that N is odd and set M = (N − 1)/2. Let Τ be a minimal set of M-tuples of non-repeating elements covering S0 U ... U SM, and d : SM → SM a bijection with X ∩ d(X) = Ø for all X in SM. The set Σ defined as

Σ = { τ·a·reverse(τ') : τ,τ' belong to Τ, range(τ') = d(range(τ)), a = S − range(τ) − range(τ')}

is a minimal permutation cover of S.

Proof. We see first that to each τ in Τ there corresponds exactly one permutation στ of the form τ·a·reverse(τ') described above: since Τ is minimal, each element of SM is covered by one and only one element of Τ; so, for each τ there is exactly one τ' covering d(range(τ)). We prove now that Σ covers S: let X be a subset of S; if |X| ≤ M, X is covered by some τ in Τ and hence also by the associated στ; if |X| > M, the set Y = S − X has |Y| ≤ M and is thus covered by some τ' in Τ; as we have seen, τ' uniquely determines a permutation σ = τ·a·reverse(τ'), and σ obviously covers X, as all the elements of Y are packed at the tail of the permutation. Finally, Σ is minimal, since |Σ| = |Τ| and we saw before that |Τ| = C(N,floor(N/2)).

The mapping τ → στ just defined can be described in a more algorithmic fashion as follows:

  1. Find the τ' in Τ covering d(range(τ)).
  2. Let a be the only element of S not present in τ nor τ'.
  3. στ := τ·a·reverse(τ').
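
A sketch of this mapping in C++ follows (illustrative; the representation of Τ as a map from each sorted M-subset to the tuple covering it, and of d as a map between sorted M-subsets, is my own assumption, not from the original entry):

#include <algorithm>
#include <cstddef>
#include <iterator>
#include <map>
#include <string>

// Mapping tau -> sigma_tau for N odd. Tuples and permutations are strings
// of distinct characters; cover maps each sorted M-subset of S to the
// unique tuple of T covering it, d implements the disjointness bijection
// between sorted M-subsets, and s holds the elements of S in sorted order.
std::string sigma_of(
  const std::string& tau,
  const std::map<std::string,std::string>& cover,
  const std::map<std::string,std::string>& d,
  const std::string& s)
{
  std::string r=tau;
  std::sort(r.begin(),r.end());                           // range(tau)
  std::string tau2=cover.find(d.find(r)->second)->second; // step 1
  std::string used=tau+tau2;
  std::sort(used.begin(),used.end());
  std::string a;                                          // step 2
  std::set_difference(
    s.begin(),s.end(),used.begin(),used.end(),std::back_inserter(a));
  return tau+a+std::string(tau2.rbegin(),tau2.rend());    // step 3
}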

Note that the last theorem extends easily to N even: in this case, the permutations have the form τ·reverse(τ'), i.e. there is no middle element a. However, it is more efficient to reduce the case N even to N − 1 using the first theorem of this entry.

Now we are left with the tasks of designing an algorithm to generate a tuple set Τ covering S0 U ... U SM and finding a suitable bijection d : SM → SM.

Monday, September 22, 2008

Profiling stable_vector

The two random access containers of the C++ standard library, std::vector and std::deque, have the same big-O complexity as stable_vector for the following operations:

  • push_back: amortized constant
  • middle insert: O(n)
  • operator[]: constant

Yet the differences in actual performance vary wildly. I have written a profile program that measures the time taken by the three containers at these operations for two different value types: int and a "heavy" user-defined type bigobj holding an int and an std::string. The program was built with release-mode settings with MSVC++ 8.0 and GNU GCC 3.4.4 for Cygwin/MinGW. The resulting times are summarized in the following tables; values are normalized to the minimum among the three containers.

MSVC++ 8.0

                           int                                   bigobj
             std::vector  std::deque  stable_vector  std::vector  std::deque  stable_vector
push_back       1.00         4.06        21.37          1.00         1.09         1.26
insert          1.00         4.24         5.29          1.50         1.73         1.00
operator[]      1.00         3.35         1.01          1.00         3.27         1.09

GNU GCC 3.4.4

                           int                                   bigobj
             std::vector  std::deque  stable_vector  std::vector  std::deque  stable_vector
push_back       1.09         1.00        91.57          1.08         1.00         1.14
insert          1.00         1.30         5.17         21.38        21.36         1.00
operator[]      1.00        41.04         1.11          1.00        81.68         2.03

For small objects like int, std::vector is, as expected, the fastest or very close to the fastest at every operation. push_back takes much longer for stable_vector than for std::deque, presumably because the latter allocates elements in chunks, thus amortizing allocation time, whereas stable_vector performs an allocation for every new node. The poor performance of stable_vector at insert can be attributed to the adjustment of "up" pointers that every insertion entails, which is considerably slower than the simple int copying incurred by the other two containers. Finally, note that random access is very slow for std::deque because it involves some non-trivial arithmetical calculations, whereas std::vector and stable_vector only perform simple pointer offsets and indirections.

When storing heavy objects that are expensive to copy, the situation changes somewhat. The fact that stable_vector does not internally copy elements upon resizing or shrinking improves its relative performance with respect to the other two containers at push_back, and especially at insert, where stable_vector is the fastest.

This basic analysis suggests that the main performance handicap of stable_vector stems from the fact that it is a node-based container, as opposed to the chunk-based nature of std::vector and std::deque. This drawback might be alleviated by the use of burst-allocation, as proposed by Ion Gaztañaga.
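
For reference, the kind of measurement loop involved looks roughly like this (an illustrative sketch, not the actual profile program):

#include <cstddef>
#include <ctime>

// Illustrative measurement skeleton: fill a container via push_back and
// return the elapsed CPU time. The times obtained for the three containers
// are then normalized by dividing by the minimum of them.
template<typename Container>
double time_push_back(std::size_t n)
{
  std::clock_t t0=std::clock();
  Container c;
  for(std::size_t i=0;i<n;++i){
    c.push_back(typename Container::value_type());
  }
  return double(std::clock()-t0)/CLOCKS_PER_SEC;
}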

Thursday, September 18, 2008

Too powerful to evolve

Paragraph 1.4.8 of the C++ standard (draft copy) states that

A conforming implementation may have extensions (including additional library functions), provided they do not alter the behavior of any well-formed program.

It turns out that, strictly speaking, this unexpectedly bans many possible extensions an implementation of the C++ standard library might provide; C++ is such a powerful beast that a well-formed program might be made to depend on the non-existence of certain entities.

Function and function templates. The following program

#include <algorithm>

using namespace std;

int any_of(int count,int n1,...){/*...*/}

int main()
{
 int n=any_of(2,1023,233);
}

is valid C++03, but will fail with the upcoming C++0x revision of the standard, as this includes a function template any_of within <algorithm>:

template <class InputIterator, class Predicate>
bool any_of(InputIterator first, InputIterator last, Predicate pred);

that takes precedence over the user-defined any_of due to the using namespace std directive.

Classes and class templates. Similarly, this valid C++03 program

#include <functional>

using namespace std;

template<typename T>
struct hash
{
 size_t operator()(const T& t){/*...*/}
};

int main()
{
 hash<int> h;
 size_t s=h(983265);
}

will fail with C++0x due to the presence of a class template with the same name, hash, in <functional>.

Member functions. The previous situations can be avoided if non-standard extensions are declared in separate headers or belong to a namespace other than std. These measures cannot be taken, however, when we are adding members to a standard class. The following is a totally artificial yet valid C++03 program:

#include <vector>

template<class Vector>
int foo(char x){/*...*/}

template <typename T,void (T::*)()> struct mfp{};

template<class Vector>
void* foo(int x,mfp<Vector,&Vector::shrink_to_fit>* =0){/*...*/}

int main()
{
 int x=foo<std::vector<int> >(0);
}

The so-called SFINAE rule prevents the second overload of foo from being selected, because in C++03 vector has no member function called shrink_to_fit; but it does have one in C++0x, which makes the second overload viable and renders the program invalid (the return type of the second foo is not convertible to int).

Of course this is not a serious objection to the C++ standard text: 1.4.8 could always be rephrased to restrict it to reasonable well-formed programs, for some definition of "reasonable". But it can be instructive to reflect on how C++ has become so powerful (some would say complex) that such devious tricks can emerge (and, in some cases, be taken advantage of).

Sunday, September 14, 2008

A causality calculus

Let us design a first-order calculus including causality as a non-truth-functional connective, for which causality semantics is defined at the same level as the rest of the (truth-functional) connectives, according to the philosophical position stated in a previous entry. We assume we are given an infinite set of variables V and a set R of predicate symbols with associated arities.

Propositional constants: F (falsehood).

Connectives: → (only if), > (causal implication).

Quantifiers: V (for all).

Axioms and rules of inference: the usual ones for standard first-order calculus plus the following axiom schemas dealing with causality:

(C1) p > q, q > r → p > r

(C2) (p > p) → F

(C3) p > q → (p and q)

(C4) p > r → (p or q) > r

(C5) p > (q and r) → p > q

These axioms deserve some comment: C1 states the usual transitivity of causation, while C2 says that no event causes itself; C3 allows us to speak about causation only among events actually taking place, which makes this a "realist" semantics. C4 and C5 translate some usual properties of material implication to causal implication.

Semantics: a valuation v specifies several mappings:

  • From V to some set U.
  • From each n-ary predicate in R to some n-ary relation in U.
  • From each formula to the set {F,T}.

and obeys the following rules:

  • v(F) = F.
  • v(φ → ψ) = T iff v(φ) = F or v(ψ) = T.
  • v(R(t1,...,tn)) = T iff (v(t1),...,v(tn)) is in v(R).
  • v(Vx φ) = T iff v'(φ) = T for every valuation v' identical to v except possibly at x.
  • The relation on formulas of the calculus defined by v(φ > ψ) = T is consistent with axiom schemas C1,..., C5. In particular, this implies that the relation thus defined is a strict partial order.

As usual, a statement is valid if it is true for any valuation. It is easy to see that there are valuations with the properties described above: just take any valuation for standard first-order predicate calculus and augment it with v(φ > ψ) = F for all φ, ψ (we may call this a Humean valuation).

Wednesday, September 10, 2008

Introducing stable_vector

We present stable_vector, a fully STL-compliant stable container that provides most of the features of std::vector except element contiguity. The underlying data structure has been discussed in a prior entry.

In the following we are using the terminology of the C++98 standard.

General properties. stable_vector satisfies all the requirements of a container, a reversible container and a sequence, and provides all the optional operations present in std::vector. As with std::vector, iterators are random access. stable_vector does not provide element contiguity; in exchange for this absence, the container is stable, i.e. references and iterators to an element of a stable_vector remain valid as long as the element is not erased, and an iterator that has been assigned the return value of end() always remains valid until the destruction of the associated stable_vector.
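
A small usage sketch (assuming the std::vector-like interface just described; the header path is illustrative) shows the stability guarantee at work:

#include <cassert>
#include "stable_vector.hpp" // illustrative path, wherever the container lives

// References and iterators survive insertions elsewhere in the sequence.
int main()
{
  stable_vector<int> v;
  v.push_back(0);
  v.push_back(1);
  stable_vector<int>::iterator it=v.begin()+1; // points to the element 1
  int& r=v[1];
  v.insert(v.begin(),100,-1); // massive insertion at the front
  assert(*it==1&&r==1);       // it and r remain valid; with std::vector
                              // both would have been invalidated
}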

Operation complexity. The big-O complexities of stable_vector operations match exactly those of std::vector. In general, insertion/deletion is constant time at the end of the sequence and linear elsewhere. Unlike std::vector, stable_vector does not internally perform any value_type destruction, copy or assignment operations other than those exactly corresponding to the insertion of new elements or deletion of stored elements, which can sometimes compensate in terms of performance for the extra burden of doing more pointer manipulation and an additional allocation per element.

Exception safety (following Abrahams' terminology). As stable_vector does not internally copy elements around, some operations provide stronger exception safety guarantees than in std::vector:

operation   exception safety for std::vector<T>                    exception safety for stable_vector<T>
insert      strong unless copy or assignment of T throw (basic)    strong
erase       no-throw unless copy or assignment of T throw (basic)  no-throw

Memory overhead. The C++ standard does not specify requirements on memory consumption, but virtually any implementation of std::vector has the same behavior with respect to memory usage: the memory allocated by a vector v with n elements of type T is

mv = c·e,

where c is v.capacity() and e is sizeof(T). c can be as low as n if the user has explicitly reserved the exact capacity required; otherwise, the average value of c for a growing vector oscillates between 1.25·n and 1.5·n for typical resizing policies. For stable_vector, the memory usage is

msv = (c + 1)p + (n + 1)(e + p),

where p is the size of a pointer. We have c + 1 and n + 1 rather than c and n because a dummy node is needed at the end of the sequence. If we call f the capacity to size ratio c/n and assume that n is large enough, we have that

msv/mv ≈ (fp + e + p)/fe.

So, stable_vector uses less memory than std::vector only when e > p and the capacity to size ratio exceeds a given threshold:

msv < mv ⇔ f > (e + p)/(e − p) (provided e > p).

This threshold approaches typical values of f below 1.5 when e > 5p; in a 32-bit architecture, when e > 20 bytes.
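
A few representative values of the ratio and threshold can be obtained with an illustrative calculation for p = 4 bytes and a typical capacity-to-size ratio f = 1.5:

#include <cstdio>

// Evaluates the approximate memory ratio m_sv/m_v = (fp + e + p)/(fe) and
// the break-even threshold (e + p)/(e - p) derived above, for a 32-bit
// architecture (p = 4 bytes) and f = 1.5.
int main()
{
  const double p=4.0,f=1.5;
  for(double e=8.0;e<=64.0;e*=2.0){
    std::printf(
      "e=%2.0f bytes: m_sv/m_v = %.2f, break-even f = %.2f\n",
      e,(f*p+e+p)/(f*e),(e+p)/(e-p));
  }
}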

Summary. stable_vector is a drop-in replacement for std::vector providing stability of references and iterators, in exchange for the loss of element contiguity and some performance and memory overhead. When the element objects are expensive to move around, the performance overhead can turn into a net performance gain for stable_vector if many middle insertions or deletions are performed or if resizing is very frequent. Similarly, if the elements are large there are situations where the memory used by stable_vector can actually be less than that required by std::vector.

Saturday, September 6, 2008

Causality as a primitive concept

The supposedly metaphysical nature of causality, according to such radical empiricists as Hume, stems from the fact that there cannot be a direct perception of a causal connection between events: perception of causes and effects is indistinguishable from perception of unrelated events that happen to be contiguous in space and time. This noncommittal position is, however, untenable, as causality is absolutely central to any theory admitting the existence of an objective world: it is only by postulating a cause-effect relation between external events and our internal perceptions of them that we can escape solipsism.

It is my take, then, that causality, at least within the context of the human perception of events, can be seen as a primitive concept at the same level as event occurrence; saying "A causes B" is as basic a statement as "It is the case that C", because causality can only be explained in terms of event occurrence, and any account of an event occurrence depends on the implied causality connecting events and their perception. Formal renditions of causality, however, propose modal-like semantics denying this concept its primitiveness, which adds to the metaphysical aura surrounding the subject: while the truth or falsity of a statement in a logical calculus is directly represented by the models of the calculus (cf. first-order predicate calculus semantics), causality semantics rely on more complex structures (e.g. possible worlds) that build on "basic" models.

What would a formalization look like in which the truth of a statement and the causal connection between two statements are modeled at the same basic level? We will explore this approach in a later entry.

Tuesday, September 2, 2008

A combinatory theorem

Definition. Let S be a finite set of cardinality N, σ = (σ1,...,σN) a permutation of the elements of S and X a subset of S. We say that σ prefix-covers (or simply covers) X if X is one of Ø, {σ1}, {σ1,σ2}, ... , {σ1,...,σN}.

For instance, if S = {a,b,c,d} and σ = (b,d,a,c), the following subsets of S are covered by σ: Ø, {b}, {b,d}, {a,b,d}, {a,b,c,d} = S.

In the following we use the notation C(a,b) for the binomial coefficient a choose b.

Theorem. C(N,floor(N/2)) is the minimum number of distinct permutations of S jointly covering all the subsets of S.

Proof. Let P be the power set of S partially ordered by inclusion. The sets covered by a permutation of S form a maximal chain of P, and this correspondence between permutations and maximal chains is clearly one-to-one. So, the minimum number of permutations of S required to cover all the subsets of S is the same as the minimum number of chains partitioning P. By Dilworth's theorem, this number is equal to the width of the partial order P, which Sperner's theorem shows to be C(N,floor(N/2)).

The first values of C(n,floor(n/2)) are: 1, 1, 2, 3, 6, 10, 20, 35, 70, 126... The following table shows some possible permutation covers for |S| = 1,...,5.

S              permutation cover
{a}            (a)
{a,b}          (ab), (ba)
{a,b,c}        (abc), (bca), (cab)
{a,b,c,d}      (abcd), (acbd), (bcda), (bdac), (cdab), (dabc)
{a,b,c,d,e}    (abcde), (bcdea), (cdeab), (deabc), (eabcd),
               (acdbe), (adbce), (bdeac), (becad), (ceabd)
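
These covers can be checked mechanically. The following brute-force verifier is an illustrative sketch (not from the original entry), with subsets represented as sorted strings of element characters:

#include <algorithm>
#include <cstddef>
#include <set>
#include <string>

// Brute-force cover check: permutations of S are given as strings of
// distinct characters; every prefix of every permutation is normalized by
// sorting and collected, and the cover is complete iff all 2^N subsets of
// S show up.
bool is_cover(const std::string& s,const std::set<std::string>& perms)
{
  std::set<std::string> covered;
  for(std::set<std::string>::const_iterator it=perms.begin();
      it!=perms.end();++it){
    for(std::size_t i=0;i<=it->size();++i){
      std::string prefix=it->substr(0,i);
      std::sort(prefix.begin(),prefix.end());
      covered.insert(prefix);
    }
  }
  return covered.size()==(std::size_t(1)<<s.size());
}

Feeding it "abcd" and the six permutations listed above returns true, while any five of them alone do not suffice, in accordance with the theorem.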

The proof of the theorem does not provide an efficient algorithm to construct a permutation cover. Finding such an algorithm is an interesting problem that we will address in a later entry.