
General-purpose Automatic Memoization for Recursive Functions in C++11

Memoization is a well-known optimization technique that speeds up programs by avoiding the recomputation of results for previously processed inputs. Previously computed results are cached so that looking them up is faster than recomputing them.

Consider a simple Fibonacci program:
unsigned long fibonacci(unsigned n)
{
    return (n < 2) ? n : fibonacci(n - 1) + fibonacci(n - 2);
}
This algorithm is a frustratingly slow way of computing the Nth Fibonacci number (N starting at 0), because it performs an exponential number of redundant recomputations. The beauty of this program, however, is that it is really simple. To speed it up without changing its logic significantly, we can use memoization.

Using some clever C++11 techniques, it is possible to memoize this function, as shown below.
unsigned long fibonacci(unsigned n)
{
    return (n < 2) ? n :
           memoized_recursion(fibonacci)(n - 1) +
           memoized_recursion(fibonacci)(n - 2);
}
To solve this problem, I took inspiration from this post on automatic memoization. I'll go into a lot more detail here, including recursive functions and memory management. Here we go!

The memoize function I'm using is slightly different from the one in the post above.
#include <functional>
#include <map>
#include <memory>
#include <tuple>

template <typename ReturnType, typename... Args>
std::function<ReturnType (Args...)>
memoize(ReturnType (*func)(Args...))
{
    // The cache maps a tuple of the arguments to the previously computed result.
    auto cache = std::make_shared<std::map<std::tuple<Args...>, ReturnType>>();
    return ([=](Args... args) mutable {
        std::tuple<Args...> t(args...);
        if (cache->find(t) == cache->end())
            (*cache)[t] = func(args...);
        return (*cache)[t];
    });
}
Function memoize accepts a pointer to a free function, wraps it in a lambda, and returns the lambda as a std::function. Returning a std::function is a common C++11 idiom for returning a lambda from the function that creates it. The implementation of the lambda is pretty straightforward if you are familiar with C++11 variadic templates. It packs the arguments into a tuple and checks whether that tuple already exists in the cache. If it does, the stored result is returned instead of being recomputed; otherwise, the result is computed and stored. The cache, which maps the arguments to the return value, is allocated dynamically and managed by a std::shared_ptr. The lambda captures the std::shared_ptr by value, so as long as there is at least one std::function alive, the cache will remain intact.
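Before dealing with recursion, here is how memoize can be used on its own. This is a minimal sketch of my own; slow_multiply is just a made-up stand-in for an expensive, pure computation.
#include <iostream>

int slow_multiply(int a, int b)
{
    return a * b;   // imagine something expensive happening here
}

int main()
{
    auto fast_multiply = memoize(slow_multiply);   // std::function<int (int, int)>
    std::cout << fast_multiply(6, 7) << "\n";      // computed and cached
    std::cout << fast_multiply(6, 7) << "\n";      // served from the cache
    return 0;
}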

Memoized functions may be called from many different places, and it is quite cumbersome to pass the memoized function everywhere it is needed. There should be a way to look up the memoized version of a function without losing its state. So our next step is to make the same memoized function available from anywhere in the program. We need a map from function pointers to memoized std::function objects; specifically, a std::unordered_map for fast lookup.
#include <unordered_map>

template <typename F_ret, typename... F_args>
std::function<F_ret (F_args...)>
memoized_recursion(F_ret (*func)(F_args...))
{
    typedef std::function<F_ret (F_args...)> FunctionType;
    static std::unordered_map<decltype(func), FunctionType> functor_map;

    if(functor_map.find(func) == functor_map.end())
        functor_map[func] = memoize(func);

    return functor_map[func];
}
Here I introduce the memoized_recursion function that our recursive fibonacci function calls. It holds a static std::unordered_map and simply looks up the memoized std::function based on the function pointer value. If it does not find one, it creates and stores it for subsequent accesses. Function pointers are unique, so no collisions are possible. Here is how to call it.
memoized_recursion(fibonacci)(10);
The solution is not finished yet, though. Memoization builds up state very fast. If many functions are memoized over a large number of arguments, the state grows explosively. There must be some way to reclaim the memory.

Remember that the memoized state grows inside the lambda: the dynamically allocated map stores the cache for the corresponding function. To clear it, we need to access an object that is hidden inside the lambda. A lambda has a compiler-defined type, and the only thing you can do with it is call it. So how would we clear the cache it is building up?

The answer is surprisingly simple! Just overwrite the memoizer (the lambda) with another, freshly created memoizer.

We already have the memoize function, which returns a brand-new memoizer with an empty cache. We simply assign the new one in place of the old one; once the old std::function goes away, its std::shared_ptr releases the cache along with it (provided no other copies are still alive). Here is what the new memoized_recursion looks like.
enum class Cache : unsigned int { NO_RECLAIM, RECLAIM };

template <typename F_ret, typename... F_args>
std::function<F_ret (F_args...)>
memoized_recursion(F_ret (*func)(F_args...), Cache c = Cache::NO_RECLAIM)
{
    typedef std::function<F_ret (F_args...)> FunctionType;
    static std::unordered_map<decltype(func), FunctionType> functor_map;

    // Overwriting the stored memoizer releases its shared_ptr; the old cache is freed
    // as soon as no other copies of that std::function remain alive.
    if(Cache::RECLAIM == c)
        return functor_map[func] = memoize(func);

    if(functor_map.find(func) == functor_map.end())
        functor_map[func] = memoize(func);

    return functor_map[func];
}
I'm using a strongly typed enum to convey the programmer's intent to clear the cache. Here is how you call it.
memoized_recursion(fibonacci, Cache::RECLAIM);
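For context, a typical sequence might look like this (my own illustrative sketch, using the memoized fibonacci from above):
fibonacci(30);                                   // builds up the cache as it recurses
memoized_recursion(fibonacci, Cache::RECLAIM);   // discards the accumulated cache
fibonacci(30);                                   // starts caching again from scratch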

Purely Static Memoizer


Strictly speaking, using a std::unordered_map in the memoized_recursion function is not necessary. It provides an O(1) mapping from function pointers to their corresponding memoized function objects (lambdas wrapped in std::function). An alternative way of achieving the same mapping is a purely static memoizer. I call it pure because there is no dynamic allocation as in std::unordered_map; the functors are stored as static objects only. This is possible only if the memoizer can be specialized individually for every free function! Note that each free function is guaranteed to have a unique pointer value, and pointer values can be used as template parameters. So here is how to combine all these things in a static_memoizer.
template <typename Sig, Sig funcptr>
struct static_memoizer;

template <typename F_ret, typename... F_args, F_ret (*func)(F_args...)>
struct static_memoizer<F_ret (*)(F_args...), func>
{
    static
    std::function<F_ret (F_args...)> &
    get(Cache c = Cache::NO_RECLAIM)
    {
        static std::function<F_ret (F_args...)> mfunc (memoize(func));

        if(Cache::RECLAIM == c)
            mfunc = memoize(func);

        return mfunc;
    }
};

#define STATIC_MEMOIZER(func) static_memoizer<decltype(&func), &func>


The STATIC_MEMOIZER macro simplifies the use of the static_memoizer. It extracts the type of the function pointer using decltype and passes it (the type) as the first parameter of the template. The second parameter is the actual function pointer. Passing the function pointer as a template parameter is important because many functions potentially share the same signature but never the same pointer.
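For example, STATIC_MEMOIZER(fibonacci)::get()(10) is just shorthand for the explicit instantiation below; cache reclamation goes through the same get function.
static_memoizer<decltype(&fibonacci), &fibonacci>::get()(10);               // memoized call
static_memoizer<decltype(&fibonacci), &fibonacci>::get(Cache::RECLAIM);     // drop the cache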

The static objects used by static_memoizer are different from those of memoized_recursion, so we have to rewrite the fibonacci function to use static_memoizer.
unsigned long fibonacci(unsigned n)
{
    return (n < 2) ? n :
           STATIC_MEMOIZER(fibonacci)::get()(n - 1) +
           STATIC_MEMOIZER(fibonacci)::get()(n - 2);
}
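To tie it together, here is a small driver of my own (not part of the original post) that exercises the static version and then reclaims its cache:
#include <iostream>

int main()
{
    std::cout << fibonacci(40) << "\n";              // fast: intermediate results are cached
    STATIC_MEMOIZER(fibonacci)::get(Cache::RECLAIM); // discard the accumulated state
    std::cout << fibonacci(40) << "\n";              // rebuilds the cache from scratch
    return 0;
}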
That's all for now. I hope you enjoyed it. See live code.

Comments

Unknown said…
The first idea when I saw this title was something like:

unsigned long fibonacci(unsigned n)
{
return memrec([](fib)-> { return [&](k)->
{
return (k < 2) ? k : fib(k - 1) + fib(k - 2);
}})(n);
}

Well this was easy in C#..
dan said…
I'm not that familiar with C++11, so I may be missing something, but how is this method thread safe?
Sumant said…
@Lance: Not sure if I understand your point. I could not find relevant documentation on memrec. Also, it is a lot of code to write for *each* recursive function. Not to mention it is hard to understand. Compare that with the second code snippet in the post.

@dan: It is not thread-safe. One possible way of making it thread-safe would be to protect the memoized_recursion function with a lock and store a different memoizer for each thread in the map. That way different threads won't contend on updating the cache.
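Roughly, something along these lines (an untested sketch; memoized_recursion_mt is just a placeholder name):

#include <map>
#include <mutex>
#include <thread>
#include <unordered_map>

template <typename F_ret, typename... F_args>
std::function<F_ret (F_args...)>
memoized_recursion_mt(F_ret (*func)(F_args...))
{
    typedef std::function<F_ret (F_args...)> FunctionType;
    typedef std::unordered_map<decltype(func), FunctionType> FunctorMap;

    // One memoizer (and hence one cache) per thread, so threads never share a cache.
    // The mutex protects the static maps themselves.
    static std::map<std::thread::id, FunctorMap> per_thread_map;
    static std::mutex mtx;

    std::lock_guard<std::mutex> lock(mtx);
    FunctorMap & functor_map = per_thread_map[std::this_thread::get_id()];
    if(functor_map.find(func) == functor_map.end())
        functor_map[func] = memoize(func);
    return functor_map[func];
}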
Unknown said…
memrec was just yet another memoized_recurse function to be written. The code I posted demos my thought on another way it could be used.

In my version, every fib is memoization-enabled, so you don't need to call a method to get a memoized function every time before invoking it. Although you can store it in a variable at the very beginning of the method.

Additionally, my function can be used to create a memoized recursive function without wrapping it inside the original function.

Maybe memrec looks like

std::function memrec(std::function , std::function> map)
{
return [](x)->
{
if ( cache[x] )
{
return cache[x];
}
else
{
return map(memrec(map))(x);
}
};
}

Looks like memrec is a modified version of a fixed-point function.
Unknown said…
blogger mutes the "lesser than" mark..

std::function〈ulong(ulong)〉 memrec(std::function〈std::function〈ulong(ulong)〉(std::function〈ulong(ulong)〉)〉 map)
Unknown said…
oh, the second return should store the value in the cache, so should be

return cache[x] = map(memrec(map))(x);


Sorry for multiple replies, but I don't know how to modify the existing one.
Anonymous said…
I think you missed the argument in the last line, which is confusing.

memoized_recursion(fibonacci, Cache::RECLAIM)(10);
Jacky Lee said…
You may take a look at BOOST::Flyweight
Dan M said…
Hello,

I am trying to implement your memoized Lambda function. I am quite new to lambda functions.

My function looks like this:

using ThermoPropertiesSubstanceFunction =
std::function;

thermo_properties_substance_fn = [=](double T, double &P, std::string symbol)
{
return thermoPropertiesSubstance(T, P, symbol);
};
thermo_properties_substance_fn = memoize(thermo_properties_substance_fn);

As you can see, I need to get the value of P, which can change when the function executes. But this creates a problem when storing the arguments of the function in the cache, because P is passed by reference. Is there a workaround for this? Or should I pass P through the returned object of the function?

Thank you!
Sumant said…
@DanM, references make things quite awkward. Consider this approach. I'll write about it later. https://wandbox.org/permlink/UOATGL8A3W1zFAxE
