LASH: A Lattice Based Hash Function

In the last few years a number of weaknesses have been found in standardised hash functions such as MD4, MD5, RIPEMD and SHA-1. All of these hash functions are essentially derived from the same design and are constructed using somewhat ad-hoc techniques. In contrast, other areas of cryptography have replaced ad-hoc construction with well-defined sets of design principles. Examples include the wide-trail design strategy of AES, or the rigorous application of reductionist provable security techniques as in the context of RSA-OAEP. While the SHA-2 family of hash functions is not yet known to succumb to the recent attack techniques, its design principles are so similar to those of SHA-1 that we have no guarantee an attack will not appear in the near future.

LASH is a new hash function design whose security properties are loosely based on the problem of finding small vectors in lattices; originally this approach was presented by Goldreich, Goldwasser and Halevi in 1996. LASH takes the idea behind the construction of Goldreich et al. and obtains an efficient hash function whose design is partly motivated by aspects of implementation such as speed and memory footprint, and the ability to fully utilise processor features available in current computer architectures.
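To give a rough flavour of this idea, the following sketch implements a bare-bones GGH-style compression function in C: the input block is treated as a vector of bits and multiplied by a fixed pseudo-random matrix, with all arithmetic performed modulo 256, so that finding a collision corresponds to finding a short vector in a related lattice. The dimensions, the modulus and the way the matrix is generated below are purely illustrative and do not match the actual LASH parameters or construction.

  #include <stdint.h>

  /* Illustrative parameters only; these are not the LASH parameters.       */
  #define TOY_ROWS 40                           /* output length, in bytes  */
  #define TOY_COLS 80                           /* input block, in bits     */

  /* A fixed "random looking" matrix H; the real construction derives its
   * matrix in a completely different, carefully specified way.             */
  static uint8_t H[TOY_ROWS][TOY_COLS];

  static void toy_matrix_init(void) {
    uint32_t s = 0x2545F491u;                   /* arbitrary toy seed       */
    for (int i = 0; i < TOY_ROWS; i++)
      for (int j = 0; j < TOY_COLS; j++) {
        s = s * 1103515245u + 12345u;           /* simple LCG, illustration */
        H[i][j] = (uint8_t)(s >> 16);
      }
  }

  /* GGH-style compression: out = H * x over Z_256, where x is a 0/1 vector,
   * i.e. out is the sum (mod 256) of the matrix columns selected by the set
   * bits of the input block.                                               */
  static void toy_compress(const uint8_t x_bits[TOY_COLS],
                           uint8_t       out[TOY_ROWS]) {
    for (int i = 0; i < TOY_ROWS; i++) {
      uint8_t acc = 0;
      for (int j = 0; j < TOY_COLS; j++)
        if (x_bits[j])
          acc = (uint8_t)(acc + H[i][j]);       /* addition wraps mod 256   */
      out[i] = acc;
    }
  }

LASH refines this basic matrix-vector approach so that the bulk of the work can be carried out with byte, word or SIMD arithmetic, which is reflected in the implementation options described below.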

Download and Installation

In order to make experimentation with and attacks on LASH easier, we have developed a reference implementation which also includes a performance-oriented SIMD version. This reference source code includes driver programs to create and process test vectors, and to benchmark LASH.

Before you start, note that LASH is distributed under the terms and conditions of the GPL. Since the LASH source code is written in C, you need a (preferably modern) compiler such as GCC. The easiest way to get hold of LASH is to download the current distribution set.

Configuration of LASH is via the lash.h header file; there are three main sets of options, and in each set exactly one option should be enabled by uncommenting it:
// !!! define exactly one hash type
  #define LASH_CONFIG_SIZE_160                  /* 160-bit output version    */
//#define LASH_CONFIG_SIZE_256                  /* 256-bit output version    */
//#define LASH_CONFIG_SIZE_384                  /* 384-bit output version    */
//#define LASH_CONFIG_SIZE_512                  /* 512-bit output version    */

// !!! define exactly one compression type
//#define LASH_CONFIG_COMP_BYTE                 /* byte based compression    */
  #define LASH_CONFIG_COMP_WORD                 /* word based compression    */
//#define LASH_CONFIG_COMP_SIMD                 /* simd based compression    */

// !!! define exactly one computation type
//#define LASH_CONFIG_MATRIX_ALL                /* everything precomputed    */
  #define LASH_CONFIG_MATRIX_ROW                /* one row    precomputed    */
//#define LASH_CONFIG_MATRIX_COL                /* one col    precomputed    */
The first set dictates the LASH parameterisation, for example aspects such as the output and block size. The second set dictates which implementation method is used for the compression function; this essentially boils down to how much parallelism is used to perform the arithmetic operations. The third set dictates how much of the matrix is pre-computed. At the moment there is no option to pre-compute nothing, although adding one would be easy and could be desirable for an embedded implementation. By default LASH is configured to give 160-bit output, to use the word-based (i.e. 32-bit) compression function, and to pre-compute a whole row of the matrix.
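Since each set expects exactly one option to be defined, a compile-time guard can catch an inconsistent lash.h early. The fragment below is not part of the distributed source; it is simply a sketch of the kind of check one could add, shown for the hash-size set only:

  /* Hypothetical guard, not present in the distributed lash.h: fail the
   * build unless exactly one output size has been selected above.          */
  #if defined(LASH_CONFIG_SIZE_160) + defined(LASH_CONFIG_SIZE_256) + \
      defined(LASH_CONFIG_SIZE_384) + defined(LASH_CONFIG_SIZE_512) != 1
    #error "lash.h: define exactly one LASH_CONFIG_SIZE_* option"
  #endif

Analogous guards could be written for the compression and matrix pre-computation sets.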
