## Thursday, September 8, 2016

### Global warming as falling into the Sun

This summer in Spain has been so particularly hot that people came up with graphical jokes like this one:
(Cáceres is my hometown; versions of this picture for many other Spanish towns swarm the net.) Pursuing the idea half-seriously, one can reason that an increase in global temperatures due to climate change might be journalistically equated with the Earth getting closer to the Sun and thus receiving more radiation, an analogy that conjures up doom-laden visions of our planet falling into the blazing hell of its star. Let us do the calculations.
Climate sensitivity, usually denoted by λ, links changes in global surface temperature with variations of received radiative power
ΔT = λ ΔW.
The mechanism by which radiative power changes (increased albedo, greenhouse effect) results in a different associated λ parameter. For the case of power variations due to changes in solar activity, Tung et al. have calculated λS to be in the range of 0.69 to 0.97 K/(W/m²) using data from observations of 11-year solar cycles, and estimate that the stationary sensitivity (i.e. if the change in power were permanent) would be 1.5 times higher, thus in the range of 1.03 to 1.45 K/(W/m²).
Now, the Earth is D0 = 1.496 × 10⁸ km away from the Sun, and receives an average radiation of W0 = 1366 W/m². Assuming far-field conditions, the radiative power received at the Earth as a function of the distance D to the Sun is then
W = w / D²,
w = 3.057 × 10²⁵ W/sr,
which allows us to calculate ΔT = λS ΔW from ΔD = D0 − D, as shown in the graph for the minimum and maximum estimated values of λS. Although this cannot be checked visually, the lines are not straight but include a negligible (in these distance ranges) quadratic component.
So, the estimated increase of 0.75 °C in global temperature during the 20th century is equivalent to pushing the Earth between 30 and 40 thousand kilometers towards the Sun. Each extra °C brings us 38,000-54,000 km closer to the star. For those stuck with USCS, each °F is equivalent to 13,000-18,000 miles.
As an alarmist meme, the figure works poorly, since no amount of global warming translates into anything resembling "falling into the Sun": relative changes in distance are measured in permyriads (parts per ten thousand). And, yes, the joke at the beginning of this article is definitely a gross exaggeration.

## Tuesday, September 6, 2016

### Compile-time checking the existence of a class template

(Updated after a suggestion from bluescarni.) I recently had to use C++14's std::is_final but wanted to downgrade to boost::is_final if the former was not available. Trusting __cplusplus means overlooking the fact that compilers never provide 100% support for any version of the language, and Boost.Config, usually helpful with these matters, does not provide, as of this writing, any macro to check for the existence of std::is_final. It turns out the matter can be investigated with some compile-time manipulations. We first set up some helping machinery in a namespace of our own:
```cpp
namespace std_is_final_exists_detail{

template<typename> struct is_final{};

struct helper{};

}
```
std_is_final_exists_detail::is_final has the same signature as the (possibly existing) std::is_final homonym, but need not implement any of the functionality since it will be used for detection only. The class helper is now used to write code directly into namespace std, as the rules of the language allow (and, in some cases, encourage) us to specialize standard class templates for our own types, like for instance with std::hash:
```cpp
namespace std{

template<>
struct hash<std_is_final_exists_detail::helper>
{
  std::size_t operator()(
    const std_is_final_exists_detail::helper&)const{return 0;}

  static constexpr bool check()
  {
    using helper=std_is_final_exists_detail::helper;
    using namespace std_is_final_exists_detail;

    return
    !std::is_same<
      is_final<helper>,
      std_is_final_exists_detail::is_final<helper>>::value;
  }
};

}
```
operator() is defined to nominally comply with the expected semantics of std::hash specialization; it is in check that the interesting work happens. By a non-obvious but totally sensible C++ rule, the directive
```cpp
using namespace std_is_final_exists_detail;
```
makes all the symbols of the namespace (including is_final) visible as if they were declared in the nearest namespace containing both std_is_final_exists_detail and std, that is, at global namespace level. This means that the unqualified use of is_final in
```cpp
!std::is_same<
  is_final<helper>,...
```
resolves to std::is_final if it exists (as it is within namespace std, i.e. closer than the global level), and to std_is_final_exists_detail::is_final otherwise. We can wrap everything up in a utility class:
```cpp
using std_is_final_exists=std::integral_constant<
  bool,
  std::hash<std_is_final_exists_detail::helper>::check()
>;
```
and check with a program
```cpp
#include <iostream>

int main()
{
  std::cout<<"std_is_final_exists: "
           <<std_is_final_exists::value<<"\n";
}
```
that dutifully outputs
```
std_is_final_exists: 0
```
with GCC in -std=c++11 mode and
```
std_is_final_exists: 1
```
with -std=c++14. Clang and Visual Studio also handle this code properly.
(Updated Sep 7, 2016.) The same technique can be used to walk the last mile and implement an is_final type trait relying on std::is_final but falling back to boost::is_final if the former is not present. I've slightly changed naming and used std::is_void for the specialization trick, as it involves a little less typing.
```cpp
#include <boost/type_traits/is_final.hpp>
#include <type_traits>

namespace my_lib{
namespace is_final_fallback{

template<typename T> using is_final=boost::is_final<T>;

struct hook{};

}}

namespace std{

template<>
struct is_void<::my_lib::is_final_fallback::hook>:
  std::false_type
{
  template<typename T>
  static constexpr bool is_final_f()
  {
    using namespace ::my_lib::is_final_fallback;
    return is_final<T>::value;
  }
};

} /* namespace std */

namespace my_lib{

template<typename T>
struct is_final:std::integral_constant<
  bool,
  std::is_void<is_final_fallback::hook>::template is_final_f<T>()
>{};

} /* namespace my_lib */
```

## Thursday, July 28, 2016

### Passing capturing C++ lambda functions as function pointers

Suppose we have a function accepting a C-style callback function like this:
```cpp
void do_something(void (*callback)())
{
  ...
  callback();
}
```
As captureless C++ lambda functions can be cast to regular function pointers, the following works as expected:
```cpp
auto callback=[](){std::cout<<"callback called\n";};
do_something(callback);

output: callback called
```
Unfortunately, if our callback code captures some variable from the context, we are out of luck:
```cpp
int num_callbacks=0;
...
auto callback=[&](){
  std::cout<<"callback called "<<++num_callbacks<<" times \n";
};
do_something(callback);

error: cannot convert 'main()::<lambda>' to 'void (*)()'
```
because capturing lambda functions create a closure of the used context that needs to be carried around to the point of invocation. If we are allowed to modify do_something we can easily circumvent the problem by accepting a more powerful std::function-based callback:
```cpp
void do_something(std::function<void()> callback)
{
  ...
  callback();
}

int num_callbacks=0;
...
auto callback=[&](){
  std::cout<<"callback called "<<++num_callbacks<<" times \n";
};
do_something(callback);

output: callback called 1 times
```
but we want to explore the challenge when this is not available (maybe because do_something is legacy C code, or because we do not want to incur the runtime penalty associated with std::function's usage of dynamic memory). Typically, C-style callback APIs accept an additional callback argument through a type-erased void*:
```cpp
void do_something(void(*callback)(void*),void* callback_arg)
{
  ...
  callback(callback_arg);
}
```
and this is actually the only bit we need to force our capturing lambda function through do_something. The gist of the trick is passing the lambda function as the callback argument and providing a captureless thunk as the callback function pointer:
```cpp
int num_callbacks=0;
...
auto callback=[&](){
  std::cout<<"callback called "<<++num_callbacks<<" times \n";
};
auto thunk=[](void* arg){ // note thunk is captureless
  (*static_cast<decltype(callback)*>(arg))();
};
do_something(thunk,&callback);

output: callback called 1 times
```
Note that we are neither using dynamic memory nor doing any extra copying of the captured data, since callback is accessed at the point of invocation through a pointer; so, this technique can be advantageous even where a modern std::function could be used instead. The caveat is that the user code must make sure that the captured data is alive when the callback is invoked (which is not the case if execution happens after scope exit, for instance because it is carried out in a different thread).
Postscript
Tcbrindle poses the issue of lambda functions casting to function pointers with C++ linkage, where C linkage may be needed. Although this is rarely a problem in practice, it can be solved through another layer of indirection:
```cpp
extern "C" void do_something(
  void(*callback)(void*),void* callback_arg)
{
  ...
  callback(callback_arg);
}

...

using callback_pair=std::pair<void(*)(void*),void*>;

extern "C" void call_thunk(void* arg)
{
  callback_pair* p=static_cast<callback_pair*>(arg);
  p->first(p->second);
}

...

int num_callbacks=0;
...
auto callback=[&](){
  std::cout<<"callback called "<<++num_callbacks<<" times \n";
};
auto thunk=[](void* arg){ // note thunk is captureless
  (*static_cast<decltype(callback)*>(arg))();
};
callback_pair p{thunk,&callback};
do_something(call_thunk,&p);

output: callback called 1 times
```

## Sunday, February 21, 2016

### A formal definition of mutation independence

Louis Dionne poses the problem of move independence in the context of C++, that is, under which conditions a sequence of operations
```cpp
f(std::move(x));
g(std::move(x));
```
is sound in the sense that the first does not interfere with the second. We give here a functional definition for this property that can be applied to the case Louis discusses.
Let X be some type and consider functions f: X → T×X and g: X → Q×X. The impurity of a non-functional construct in an imperative language such as C++ is captured in this functional setting by the fact that these functions return, besides the output value itself, a new, possibly changed, value of X. We denote by fT and fX the projections of f onto T and X, respectively, and similarly for g. We say that f does not affect g if
gQ(x) = gQ(fX(x)) ∀x ∈ X.
If we define the equivalence relationship ~g in X as
x ~g y iff gQ(x) = gQ(y),
then f does not affect g iff
fX(x) ~g x ∀x ∈ X
or
fX([x]g) ⊆ [x]g ∀x ∈ X,
where [x]g is the equivalence class of x under ~g.
We say that f and g are mutation-independent if f does not affect g and g does not affect f, that is,
fX([x]g) ⊆ [x]g and gX([x]f) ⊆ [x]f ∀x ∈ X.
The following considers the case of f and g acting on separate components of a tuple: suppose that X = X1×X2 and f and g depend on and mutate X1 and X2 alone, respectively, or put more formally:
fT(x1,x2) = fT(x1,x'2),
fX2(x1,x2) = x2,
gQ(x1,x2) = gQ(x'1,x2),
gX1(x1,x2) = x1
for all x1, x'1 ∈ X1 and x2, x'2 ∈ X2. Then f and g are mutation-independent (proof trivial). Getting back to C++, given a tuple x, two operations of the form:
```cpp
f(std::get<i>(std::move(x)));
g(std::get<j>(std::move(x)));
```
are mutation-independent if i!=j; this can be extended to the case where f and g read from (but not write to) any component of x except the j-th and i-th, respectively.

## Monday, January 11, 2016

### (Oil+tax)-free Spanish gas prices 2014-15

We use the data gathered in our hysteresis analysis of Spanish gas prices for 2014 and 2015 to gain further insight into their dynamics. This is a simple breakdown of gas (or gasoil) price:
Price = oil cost + other costs + taxes + margin.
A barrel of crude oil is refined into several final products totalling approximately the same amount of volume, that is, it takes roughly one liter of crude oil to produce one liter of gas (or gasoil). The simplest allocation model is to use market Brent prices as the oil cost for fuel production (we will see more realistic models later). If we eliminate taxes and oil cost, what remains in the fuel price is other costs plus margin. We plot this number for 95 octane gas and gasoil compared with Brent oil price, all in c€/l, for the period 2014-2015:
[Figure: (oil+tax)-free fuel price, simple cost allocation model, vs. Brent oil cost, both in c€/l]
When we factor out crude oil cost, the remaining parts of the price increase moderately (~25% for gasoline, ~15% for gasoil). In a scenario of oil price reduction, direct oil costs as a percentage of tax-free fuel prices have consequently dropped from 70% to 50%:
[Figure: oil direct cost / tax-free fuel price, simple cost allocation model]
Value-based cost allocation
Crude oil is refined into several final products, from high-quality fuel to asphalt, plastics, etc. The EIA provides typical yield data for US refineries that we can use as a reasonable approximation to the Spanish case. The volume breakdown we are interested in is roughly:
• Gas: 45%
• Gasoil: 30%
• Other products: 37%
(Note that the sum is greater than 100% because additional components are mixed in the process). Now, as these products have very different prices in the market, it is natural to allocate oil costs proportionally to end-user value:
pricetotal = 45% pricegasoline + 30% pricegasoil + 37% priceother,
costgasoline = costoil × pricegasoline / pricetotal,
costgasoil = costoil × pricegasoil / pricetotal
(prices without taxes). Since it is difficult to obtain accurate data on prices for the remaining products, we consider two conventional scenarios where these products are valued at 50% and 25% of the average fuel price, respectively:
• A: priceother = 50% (pricegasoline + pricegasoil)/2
• B: priceother = 25% (pricegasoline + pricegasoil)/2
The figure depicts resulting prices without oil costs or taxes (i.e. other costs plus margin):
[Figure: (oil+tax)-free fuel price, value-based cost allocation, vs. Brent oil cost, both in c€/l]
Unlike with our previous, naïve allocation model, here we see, in both scenarios A and B, that margins for gasoline and gasoil match very precisely almost all the time: this can be taken as further indication that value-based cost allocation is indeed the model used by the fuel companies themselves. Visual inspection reveals two insights:
• Short-term, margin fluctuations are countercyclical to oil price. This might be due to an effort from companies to stabilize prices.
• In the two-year period studied, margins grow substantially, around 30% for scenario A and 60% for scenario B. This trend has been somewhat corrected in the second half of 2015, though.
The percentage contribution of oil costs to fuel prices (which, by virtue of the cost allocation model, is exactly the same for gasoline and gasoil) drops in 2014-15 from 75% to 55% (scenario A) and from 85% to 60% (scenario B).
[Figure: oil direct cost / tax-free fuel price, value-based cost allocation]

### Gas price hysteresis, Spain 2015

We begin the new year redoing our hysteresis analysis for Spanish gas prices with data from 2015, obtained from the usual sources:
The figure shows the weekly evolution during 2015 of prices of Brent oil and average retail prices without taxes of 95 octane gas and gasoil in Spain, all in c€ per liter.
For gasoline, the corresponding scatter plot of Δ(gasoline price before taxes) against Δ(Brent price) is shown in the figure, with linear regressions for the entire data set and for both semiplanes Δ(Brent price) ≥ 0 and Δ(Brent price) ≤ 0, given by
overall → y = f(x) = b + mx = −0.1210 + 0.2554x,
ΔBrent ≥ 0 → y = f+(x) = b+ + m+x = 0.2866 − 0.0824x,
ΔBrent ≤ 0 → y = f−(x) = b− + m−x = 0.3552 + 0.4040x.
Due to the outlier in the lower right corner (dated August 31), positive variations in oil price do not translate, on average, into positive increments in the price of gasoline. The most worrisome aspect is the fact that b+ and b− are both positive, which suggests an underlying trend to increase prices when oil is stable.
For gasoil we have the analogous scatter plot, with regressions
overall → y = f(x) = b + mx = −0.0672 + 0.3538x,
ΔBrent ≥ 0 → y = f+(x) = b+ + m+x = −0.2457 + 0.2013x,
ΔBrent ≤ 0 → y = f−(x) = b− + m−x = 0.2468 + 0.3956x.
Again, no "rocket and feather" effect here (in fact, m+ is smaller than m−). Variations around ΔBrent = 0 are fairly symmetrical and, seemingly, fair.

## Monday, December 28, 2015

### How likely?

Yesterday, CUP political party held a general assembly to determine whether or not to support Artur Mas's candidacy to President of the Catalonian regional government. The final voting round among 3,030 representatives ended up in an exact 1,515/1,515 tie, leaving the question unsolved for the time being. Such an unexpected result has prompted a flurry of Internet activity about the mathematical probability of its occurrence.
The question "how likely was this result to happen?" is of course unanswerable without a specification of the context (i.e. the probability space) we choose to frame the event. A plausible formulation is:
If a proportion p of CUP voters are pro-Mas, how likely is it that a random sample based on 3,030 individuals yields a 50/50 tie?
The simple answer (assuming the number of CUP voters is much larger than 3,030) is Pp(1,515 | 3,030), where Pp(n | N) is the binomial probability of N Bernoulli trials with success probability p resulting in exactly n successes.
The figure shows this value for 40% ≤ p ≤ 60%. At p = 50%, which without further information is our best estimate of the proportion of pro-Mas supporters among CUP voters, the probability of a tie is 1.45%. A deviation in p of ±4% would have made this result virtually impossible.
A slightly more interesting question is the following:
If a proportion p of CUP voters are pro-Mas, how likely is a random sample of 3,030 individuals to misestimate the majority opinion?
When p is in the vicinity of 50%, there is a non-negligible probability that the assembly vote come up with the wrong (i.e. against voters' wishes) result. This probability is
Ip(1,516, 1,515) if p < 50%,
1 − Pp(1,515 | 3,030) if p = 50%,
I1−p(1,516, 1,515) if p > 50%,
where Ip(a,b) is the regularized incomplete beta function. The figure shows the corresponding graph for 3,030 representatives and 40% ≤ p ≤ 60%.
The function shows a discontinuity at the singular (and zero-probability) event p = 50%, in which case the assembly will always yield the wrong result except in the previously studied situation of an exact tie (so the probability of misestimation is 1 − 1.45% = 98.55%). Other than this, the likelihood of misestimation approaches 49%+ as p tends to 50%. We have learnt that CUP voters are almost evenly divided between pro- and anti-Mas positions: if the difference between the two is 0.7% or less, an assembly of 3,030 representatives such as yesterday's will fail to reflect the party's global position in more than 1 out of 5 cases.