Optimizing Python code for computations of pair-wise distances - Part I

Sep 30, 2019


In this series of posts, several different Python implementations are provided to compute the pair-wise distances between particles under a periodic boundary condition. The performance of each method is benchmarked for comparison. I will investigate implementations based on pure Python, NumPy, Numba, and Cython.


Background #

In molecular dynamics simulations or other simulations of similar types, one of the core computations is to compute the pair-wise distances between particles. Suppose we have $N$ particles in our system; the time complexity of computing their pair-wise distances is $O(N^2)$. This is the best we can do when the whole set of pair-wise distances is needed. The good thing is that in an actual simulation, in most cases, we don't care about a distance if it is larger than some threshold. In such a case, the complexity can be greatly reduced to $O(N)$ using a neighbor-list algorithm.

In this post, I won't implement the neighbor list algorithm. I will assume that we do need all the distances to be computed.

If there is no periodic boundary condition, the pair-wise distances can be computed directly using the built-in SciPy function scipy.spatial.distance.pdist, which is pretty fast. However, with a periodic boundary condition, we need to roll our own implementation. For a simple demonstration without losing generality, the simulation box is assumed to be cubic and to have its lower left forward corner at the origin. Such a setup simplifies the computation.
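
For reference, here is a minimal sketch of the non-periodic case using scipy.spatial.distance.pdist; the random positions below are just assumed example data.

import numpy as np
from scipy.spatial.distance import pdist

# 100 random particles in a unit cube (example data)
positions = np.random.rand(100, 3)

# condensed 1D array of all pair-wise Euclidean distances,
# with no periodic boundary condition applied
distances = pdist(positions)
print(distances.shape)  # (100 * 99 / 2,) = (4950,)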

The basic algorithm for calculating the distance under a periodic boundary condition is the following:

$$\Delta = \sigma - \left[\sigma/L\right] L$$

where $\sigma = x_i - x_j$ and $L$ is the length of the simulation box edge. $\left[\cdot\right]$ denotes rounding to the nearest integer. $x_i$ and $x_j$ are the positions of particles $i$ and $j$ along one dimension. This computes the distance between two particles along one dimension. The full distance is the square root of the sum of $\Delta^2$ over all dimensions.
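
To make the formula concrete, here is a small numerical illustration along a single dimension; the coordinates are made up for demonstration.

l = 1.0                                 # box edge length L
xi, xj = 0.05, 0.95                     # two particles near opposite faces
sigma = xi - xj                         # sigma = -0.9
delta = sigma - round(sigma / l) * l    # -0.9 - (-1) * 1.0
print(delta)                            # ~0.1: the particles are close through the boundary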

Basic setup:

  • All code shown uses Python version 3.7.

  • The number of particles is n.

  • The positions of all particles are stored in a list/array of the form [[x1,y1,z1],[x2,y2,z2],...,[xN,yN,zN]], where [xi,yi,zi] are the coordinates of particle i (see the small example after this list).

  • The length of the simulation box edge is l.

  • We will use libraries and tools such as numpy, itertools, math, numba, and cython.
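
For concreteness, here is a tiny example of that layout with made-up numbers (three particles in a box of edge length 1.0):

n = 3
l = 1.0
positions = [[0.10, 0.20, 0.30],
             [0.85, 0.40, 0.95],
             [0.50, 0.55, 0.05]]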

Pure Python Implementation #

To clarify first, by pure, I mean that only built-in libraries of Python are allowed. numpy, scipy or any other third-party libraries are not allowed. Let us first define a function to compute the distance between just two particles.

import math

def distance(p1, p2, l):
    """
    Computes the distance between two points, p1 and p2.

    p1/p2: python lists of the form [x1, y1, z1] and [x2, y2, z2] giving the coordinates of the two points
    l: the length of edge of box (cubic/square box)
    return: a number (distance)
    """
    dim = len(p1)
    D = [p1[i] - p2[i] for i in range(dim)]
    distance = math.sqrt(sum(map(lambda x: (x - round(x / l) * l) ** 2.0, D)))
    return distance
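
As a quick usage example (with made-up coordinates), two particles near opposite faces of a unit box are actually close to each other through the periodic boundary:

p1 = [0.05, 0.5, 0.5]
p2 = [0.95, 0.5, 0.5]
# without the periodic boundary condition the separation along x would be 0.9,
# but through the boundary it is only 0.1
print(distance(p1, p2, 1.0))  # ~0.1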

Now we can define the function to iterate over all possible pairs to give the full list of pair-wise distances,

def pdist(positions, l):
    """
    Compute the pair-wise distances between every possible pair of particles.

    positions: a python list in which each element is a list of coordinates
    l: the length of edge of box (cubic/square box)
    return: a condensed 1D list
    """
    n = len(positions)
    pdistances = []
    for i in range(n-1):
        for j in range(i+1, n):
            pdistances.append(distance(positions[i], positions[j], l))
    return pdistances
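
As a quick sanity check, one can compare against scipy.spatial.distance.pdist using a box edge much larger than the spread of the coordinates, so the periodic correction never kicks in and the result reduces to plain Euclidean distances:

import numpy as np
from scipy.spatial.distance import pdist as scipy_pdist

positions = np.random.rand(50, 3)
# with l = 100 the round(.../l) term is always 0
ours = pdist(positions.tolist(), 100.0)
print(np.allclose(ours, scipy_pdist(positions)))  # expected: True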

The function pdist returns a list containing distances of all pairs. Let's benchmark it!

import numpy as np
n = 100
positions = np.random.rand(n,3).tolist()  # convert to python list

%timeit pdist(positions, 1.0)
14.8 ms ± 2.42 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Such speed is sufficient if n is small. In the above example, we already utilize the built-in map function and list comprehension to speed up the computation. Can we speed up our code further using only built-in libraries? It turns out that we can. Notice that in the function pdist, there is a nested loop. What that loop is doing is iterating over all the combinations of particles. Luckily, the built-in module itertools provides a function combinations to do just that. Given a list object lst or another iterable object, itertools.combinations(lst, r=2) generates an iterator over all unique pair-wise combinations of elements from lst without duplicates. For instance, list(itertools.combinations([1,2,3], r=2)) will return [(1,2),(1,3),(2,3)]. Utilizing this function, we can rewrite the pdist function as the following,

import itertools

def pdist_v2(positions, l):
    # itertools.combinations returns an iterator over all unique pairs
    all_pairs = itertools.combinations(positions, r=2)
    # *pair unpacks the two coordinate lists; map then applies the lambda dimension by dimension
    return [math.sqrt(sum(map(lambda p1, p2: (p1 - p2 - round((p1 - p2) / l) * l) ** 2.0, *pair))) for pair in all_pairs]

  • First, we use itertools.combinations() to return an iterator all_pairs over all possible pair-wise combinations of particles. r=2 means that we only want pairs (no triplets, etc.).

  • We loop over all_pairs with a list comprehension of the form [do_something(pair) for pair in all_pairs].

  • pair is a tuple of the coordinates of two particles, ([xi,yi,zi],[xj,yj,zj]).

  • We use *pair to unpack the tuple object pair and then use map and a lambda function to compute the squared separation along each dimension. In the lambda, p1 and p2 are the coordinates of the two particles along a single dimension (see the short demonstration after this list).
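
The way map walks two iterables in parallel may be unfamiliar, so here is a minimal sketch of the same pattern on made-up coordinates:

pair = ([0.1, 0.2, 0.3], [0.4, 0.6, 0.9])  # coordinates of two particles (made up)
# map(f, iter1, iter2) calls f(iter1[0], iter2[0]), f(iter1[1], iter2[1]), ...
diffs = list(map(lambda p1, p2: p1 - p2, *pair))
print(diffs)  # [-0.3, -0.4, -0.6] (up to floating point noise)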

Rigorously speaking, itertools.combinations takes an iterable object as an argument and returns an iterator. I recommend reading this article and the official documentation to understand the concepts of iterable/iterator/generator, which are very important for advanced Python programming.

Now let's benchmark pdist_v2 and compare it to pdist. To make the comparison systematic, I benchmark the performance for different values of n and plot the speed as a function of n. The result is shown below.
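
The benchmarking script behind the plot is not shown here; a minimal sketch of one way to produce such a comparison, using the standard timeit module instead of the %timeit magic, could look like this:

import timeit
import numpy as np

for n in [50, 100, 200, 400]:
    positions = np.random.rand(n, 3).tolist()
    # average wall time per call, over 5 calls each
    t1 = timeit.timeit(lambda: pdist(positions, 1.0), number=5) / 5
    t2 = timeit.timeit(lambda: pdist_v2(positions, 1.0), number=5) / 5
    print(n, t1, t2)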

Benchmark: comparison between pdist and pdist_v2

If this is plotted on a log-log scale, one can readily see that both curves scale as $N^2$, which is expected.

Conclusion #

The pdist_v2 implementation is about 38% faster than the pdist version. I guess the take-home message from this result is that replacing explicit for loops with functions like map and itertools can boost performance. However, one needs to make a strategic decision here, as the pdist version with the explicit loop is much more readable and easier to understand, whereas pdist_v2 requires a more or less advanced understanding of Python. Sometimes the readability and maintainability of code are more important than its performance.

In the benchmark code above, we convert the numpy array of positions to a python list. Since a numpy array can be treated just like a python list (but not vice versa), we could instead directly provide the numpy array as the argument to both pdist and pdist_v2. However, a little experimentation shows that using the numpy array directly actually slows down the computation a lot (about 5 times slower on my laptop). The message is that mixing numpy arrays with built-in functions such as map or itertools harms the performance. Instead, one should always try to use numpy's native functions when working with numpy arrays.
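
For anyone who wants to reproduce that observation, a minimal sketch of the comparison looks like this (timings will of course vary by machine):

import numpy as np

n = 100
positions_array = np.random.rand(n, 3)
positions_list = positions_array.tolist()

# passing the numpy array straight into the pure-python implementation
%timeit pdist(positions_array, 1.0)

# passing a plain python list
%timeit pdist(positions_list, 1.0)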


In the next post, I will show how to use Numpy to do the same calculation but faster than the pure Python implementation shown here.