https://dotat.at/@/2023-06-21-pcg64-dxsm.html
Last week I was interested to read about the proposed math/rand/v2
for Golang's standard library. It mentioned a new-ish flavour
of PCG called DXSM which I had not previously encountered. This blog
post collects what I have learned about PCG64 DXSM. (I have not found
a good summary elsewhere.)
background
Here's the bit where the author writes their life story, and all the readers skip several screens full of text to get to the recipe.
Occasionally I write randomized code that needs a pseudorandom number generator that:
- is not gratuitously terrible;
- is seedable, so I can reproduce test cases;
- does not kill multicore performance with global mutable state;
- runs everywhere without portability faff.
I can maybe get one or two of my requirements, if I am lucky.
So I grab a copy of PCG and use that. It's only like 10 lines of code, and PCG's creator isn't an arsehole.
how pcg works
PCG random number generators are constructed from a collection of linear congruential generators, and a collection of output permutations.
A linear congruential random number generator looks like:
```c
state = state * mul + inc;
```
The multiplier `mul` is usually fixed; the increment `inc` can be fixed, but PCG implementations usually allow it to be chosen when the RNG is seeded.
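For concreteness, a bare 64-bit LCG step might look like the sketch below. The constants are Knuth's well-known MMIX multiplier and increment, chosen here purely for illustration:

```c
#include <stdint.h>

/* one step of an illustrative bare 64-bit LCG; the constants are
 * Knuth's MMIX multiplier and increment, not anything PCG-specific */
static uint64_t
lcg64(uint64_t *state) {
	*state = *state * 6364136223846793005ULL + 1442695040888963407ULL;
	return (*state);
}
```

The whole output is just the raw state, which is why (as discussed next) an LCG on its own is a bad RNG: the low bits in particular have very short periods.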
A bare LCG is a bad RNG. PCG turns an LCG into a good RNG:
- The LCG state is twice the size of the RNG output
- The LCG state is permuted to produce the RNG output
PCG's output permutations have abbreviated names like XSH (xor-shift high), RR (random rotate), RXS (random xor-shift), XSL (xor-shift low), etc.
using pcg
The reference implementation of PCG in C++ allows you to mix and match LCGs and output permutations at a variety of integer sizes. There is a bewildering number of options, and the PCG paper explains the desiderata at length. It is all very informative if you are a researcher interested in exploring the parameter space of a family of random number generators.
But it's all a bit too much when all I want is a replacement for `rand()` and `srand()`.
pcg32
For 32-bit random numbers, PCG has a straightforward solution in the form of pcg_basic.c. In C++ PCG this standard 32-bit variant is called `pcg_engines::setseq_xsh_rr_64_32`, or simply `pcg32` for short.
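For orientation, the heart of that generator is tiny. This sketch follows the shape of the minimal C implementation published by the PCG project: a 64-bit LCG whose old state is permuted down to 32 bits by XSH-RR (xor-shift high, then a random rotate):

```c
#include <stdint.h>

typedef struct { uint64_t state, inc; } pcg32_t;

/* one step of pcg32: advance a 64-bit LCG, then permute the
 * previous state with XSH-RR to produce 32 bits of output */
static uint32_t
pcg32_random_r(pcg32_t *rng) {
	uint64_t oldstate = rng->state;
	rng->state = oldstate * 6364136223846793005ULL + rng->inc;
	uint32_t xorshifted = (uint32_t)(((oldstate >> 18) ^ oldstate) >> 27);
	uint32_t rot = (uint32_t)(oldstate >> 59);
	return (xorshifted >> rot) | (xorshifted << ((-rot) & 31));
}
```

The top five bits of the old state select the rotation, so the weakest (most linear) bits of the LCG never reach the output directly.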
(There is a caveat, tho: `pcg32_boundedrand_r()` would be faster if it used Daniel Lemire's nearly-divisionless unbiased rejection sampling algorithm for bounded random numbers.)
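A sketch of Lemire's nearly-divisionless technique, written here against a stand-in `rand32()` generator (the name and the LCG behind it are placeholders for illustration; any uniform 32-bit source such as pcg32 would do):

```c
#include <stdint.h>

/* stand-in 32-bit source for illustration: a bare 64-bit LCG
 * returning its high bits; substitute your real pcg32 step here */
static uint64_t rand32_state = 0x853c49e6748fea9bULL;

static uint32_t
rand32(void) {
	rand32_state = rand32_state * 6364136223846793005ULL
	             + 1442695040888963407ULL;
	return (uint32_t)(rand32_state >> 32);
}

/* Lemire's nearly-divisionless unbiased sampler: returns a uniform
 * value in [0, bound); the expensive % runs only on the rare slow
 * path, and the common case is one multiply and one comparison */
static uint32_t
bounded_rand(uint32_t bound) {
	uint64_t m = (uint64_t)rand32() * (uint64_t)bound;
	uint32_t l = (uint32_t)m;
	if (l < bound) {
		/* -bound wraps to 2^32 - bound, so t = 2^32 mod bound */
		uint32_t t = -bound % bound;
		while (l < t) {
			m = (uint64_t)rand32() * (uint64_t)bound;
			l = (uint32_t)m;
		}
	}
	return (uint32_t)(m >> 32);
}
```

The trick is that `m` maps the 32-bit random value into `bound` nearly-equal intervals of [0, 2^64); the rejection loop trims the tiny bias, but is taken with probability less than `bound` / 2^32.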
pcg64
For 64-bit random numbers it is not so simple. There is no 64-bit equivalent of pcg_basic.c. The reference implementations have a blessed flavour called `pcg64`, but it isn't trivial to unpick the source code's experimental indirections to get a 10 line implementation.

And even if you do that, you won't get the best 64-bit flavour, which is:
pcg64 dxsm
PCG64 DXSM is used by NumPy. It is a relatively new flavour of PCG, which addresses a minor shortcoming of the original `pcg64` that arose in the discussion when NumPy originally adopted PCG.
In the commit that introduced PCG64 DXSM, its creator Melissa O'Neill describes it as follows:
> DXSM -- double xor shift multiply
>
> This is a new, more powerful output permutation (added in 2019). It's a more comprehensive scrambling than RXS M, but runs faster on 128-bit types. Although primarily intended for use at large sizes, also works at smaller sizes as well.
As well as the DXSM output permutation, `pcg64_dxsm()` uses a "cheap multiplier", i.e. a 64-bit value half the width of the state, instead of a 128-bit value the same width as the state. The same multiplier is used for the LCG and the output permutation. The cheap multiplier improves performance: `pcg64_dxsm()` has fewer full-size 128 bit calculations.
O'Neill wrote a longer description of the design of PCG64 DXSM, and the NumPy documentation discusses how PCG64DXSM improves on the old PCG64.
In C++ PCG, PCG64 DXSM's full name is `pcg_engines::cm_setseq_dxsm_128_64`. As far as I can tell it doesn't have a more friendly alias. (The C++ PCG typedef `pcg64` still refers to the previously preferred `xsl_rr` variant.)

In the Rust `rand_pcg` crate, PCG64 DXSM is called `Lcg128CmDxsm64`, i.e. a linear congruential generator with 128 bits of state and a cheap multiplier, using the DXSM permutation with 64 bits of output.
That should be enough search keywords and links, I think.
pcg64 dxsm implementation
OK, at last, here's the recipe that you were looking for:
```c
// SPDX-License-Identifier: 0BSD OR MIT-0

#include <stdint.h>

/* uint128_t is not standard C; this uses the GCC/Clang extension */
typedef unsigned __int128 uint128_t;

typedef struct pcg64 {
	uint128_t state, inc;
} pcg64_t;

static inline uint64_t
pcg64_dxsm(pcg64_t *rng) {
	/* cheap (half-width) multiplier */
	const uint64_t mul = 15750249268501108917ULL;
	/* linear congruential generator */
	uint128_t state = rng->state;
	rng->state = state * mul + rng->inc;
	/* DXSM (double xor shift multiply) permuted output */
	uint64_t hi = (uint64_t)(state >> 64);
	uint64_t lo = (uint64_t)(state | 1);
	hi ^= hi >> 32;
	hi *= mul;
	hi ^= hi >> 48;
	hi *= lo;
	return(hi);
}
```
The algorithm for seeding the RNG is the same for all flavours of PCG:
```c
pcg64_t
pcg64_seed(pcg64_t rng) {
	/* must ensure rng.inc is odd */
	rng.inc = (rng.inc << 1) | 1;
	rng.state += rng.inc;
	pcg64_dxsm(&rng);
	return(rng);
}
```
no subject
Date: 2023-06-22 08:15 (UTC)

…`pcg64_seed()` function. It calls a subroutine called `pcg64()` which isn't shown here at all (perhaps it was meant to refer to `pcg64_dxsm()` above?), but more importantly, it seems to be evolving the `pcg64_t` from its previous state, rather than – as I'd have expected, especially given your own comment about reproducing a failing test run – overwriting the entire state with something derived fresh from an input seed parameter. What's the procedure for doing the latter?

no subject
Date: 2023-06-22 08:57 (UTC)

Whoops, thanks, I fixed the function name.

The seed function just conditions the raw seed values you pass in. You can call it with a structure literal, or like …
no subject
Date: 2023-06-22 09:02 (UTC)

no subject

Date: 2023-06-22 09:26 (UTC)

heh, that's a much better name for the function :-)
no subject
Date: 2023-06-22 09:45 (UTC)

(It's quite nice to be able to enter an arbitrary string as your puzzle random seed – especially since it lets two players bilaterally generate a new puzzle to race on, by means of each of them contributing half the seed string and then the resulting puzzle will be new to both of them.)
Which reminds me, speaking of SHA-256, that I only recently encountered the Conway's Life wiki, and its definition of a Life pattern "occurring in the wild", in the sense of having evolved from a random configuration as opposed to having been deliberately designed. At first glance that seems like a very slippery concept – more and more things can be deemed to occur in the wild the more random test runs you do, and worse, after you've done a test run and observed a thing, how can you convince anyone else you really saw it, or that your starting configuration was really random? But they came up with a clever definition that completely solves all these problems: a Life pattern is deemed to occur in the wild if and only if it is generated from a 16×16 starting pattern for which a SHA-256 preimage is known.
no subject
Date: 2023-06-22 10:39 (UTC)

Oh, that definition of "in the wild" is marvellous :-)

I guessed you might be interested in RNGs for puzzle purposes, I'm pleased I guessed right!
no subject
Date: 2023-06-22 11:02 (UTC)

Of course SHA-1 isn't nearly as respectable now as it was back in 2004! But this isn't really a cryptographic use of it. It was more that it was a piece of code I already had lying around, which would definitely generate numbers that were random enough (I certainly wouldn't have trusted an LCG), and which was well specified and portable so that random numbers on all my supported platforms would come out the same, and you'd be able to transfer random seeds between (the same build of) the puzzles on different OSes or CPU architectures.
So I don't think it's worth upgrading to using SHA-256 in the same style; that's just a slightly slower version of something that's already good enough as it is. But downgrading to a less cryptographic but faster RNG, relegating the secure hash function to just the preprocessing step, is at least potentially interesting to me, although I'm a bit concerned about the 'sufficiently random' aspect, and when I've contemplated this idea in the past, I've been thinking more in terms of Mersenne Twister than anything that has ever been near an LCG.
But I hadn't previously encountered the idea that there is anything simple you can do to an LCG to make it significantly less sucky, so I may investigate further, if tuits permit!
no subject
Date: 2023-06-22 11:19 (UTC)

My understanding is that Mersenne Twister is not great compared to PCG and its competitors. It has a monstrously huge state, its code is bigger and slower, and it fails BigCrush (tho the test that it fails was designed to find flaws in Mersenne Twister...)
no subject
Date: 2023-06-22 14:53 (UTC)

The other RNG I vaguely had in mind was ISAAC, mostly because I once ran into an "LCGs are horrendous" type problem with a Perl script using Perl's built-in `rand` (aka trad `drand48`), and looking around for convenient alternatives, found that Perl offers `Math::Random::ISAAC`, which turned out to produce much more acceptable output in the particular case in question.

However, ISAAC is described in Wikipedia as cryptographic, which suggests it might still be slower than necessary. Also it looks old enough that I felt surely it would have been improved on by now!
no subject
Date: 2023-06-22 15:19 (UTC)

Oh wow, I saw the acronym expansion "indirection, shift, accumulate, add, and count" and the "I" immediately prompted me to go and look at the source code, and yes, ISAAC uses part of its secret state to index into its secret state. Not what one would consider a safe design for a cryptographic algorithm in the last couple of decades!
And 2kbits of state is better than 20kbits of state but not so nice as 256 bits :-)
no subject
Date: 2023-06-22 15:21 (UTC)