Using an irradiance cache, computing diffuse indirect light can be sped up by sampling irradiance at fewer locations and interpolating the result at others. On simple scenes with little detail this can easily give a 10x speedup; on more complicated scenes it varies. It is an approximation and so is not guaranteed to be correct, and it results in low frequency rather than high frequency noise.
Before shading all the pixels in a part, the irradiance cache is built from the samples that will be shaded later. For each of these, a cache lookup is done one after the other, and if no suitable points to interpolate from are found, a new sample is computed and added. In the second pass, when the actual shading happens, a lookup into the cache is done again. Doing this in two passes avoids some striping artifacts and means more samples are available to interpolate from.
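The two-pass scheme above can be sketched as follows. This is an illustrative outline, not the actual implementation: the function names, the simple distance/normal criterion for reusing a sample, and the inverse-distance interpolation are all placeholders for the real (more elaborate) logic.

```python
import math

def usable_samples(cache, pos, normal, max_dist=0.25):
    # Placeholder reuse criterion: a cached sample is suitable if it is close
    # enough and similarly oriented (the real metric is far more careful).
    return [s for s in cache
            if math.dist(pos, s[0]) <= max_dist
            and sum(a * b for a, b in zip(normal, s[1])) > 0.9]

def interpolate(samples, pos):
    # Inverse-distance weighted average of the usable samples' irradiance.
    weights = [1.0 / (math.dist(pos, s[0]) + 1e-6) for s in samples]
    return sum(w * s[2] for w, s in zip(weights, samples)) / sum(weights)

def render_part(part_points, compute_irradiance):
    cache = []  # list of (position, normal, irradiance) samples
    # Pass 1: build the cache. Points are visited one after the other, and a
    # new sample is computed only where nothing suitable can be interpolated.
    for pos, normal in part_points:
        if not usable_samples(cache, pos, normal):
            cache.append((pos, normal, compute_irradiance(pos, normal)))
    # Pass 2: the actual shading. Every point now interpolates from the
    # complete cache, including samples inserted after it in pass 1, which
    # avoids the striping artifacts of a single interleaved pass.
    return [interpolate(usable_samples(cache, pos, normal), pos)
            for pos, normal in part_points]
```

Note that the expensive `compute_irradiance` runs only in pass 1, and only for the subset of points where the cache had no answer; pass 2 is pure interpolation.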
A weak point is that the irradiance cache is built per part. This means that less memory is used, results appear on the screen more quickly, and there are no threading issues. However, it can give visible differences between parts if not enough points are in the cache or the number of ray samples is too low. These settings would have caused low frequency noise anyway, but a sharp difference between parts is more objectionable. A single unified cache may be a good thing to add in the future, to get rid of this problem.
Another weak point is that it currently does not work well with raytraced reflections/refractions. Only directly visible samples are inserted into the cache in the first pass. That means that for samples seen through a reflection/refraction, interpolation may show artifacts as if a single pass were being used.
The main trick is finding a good way to measure how accurately a sample can be interpolated to another position. The implementation has been tweaked a lot to minimize artifacts, and no tweakable parameters have been exposed to the user; the only thing that can be tweaked now is the number of raytracing samples. It may be useful to expose irradiance cache parameters in the future, though ideally no parameter tweaking should be needed.
See the papers linked in the code for the meaning of the various formulas. The one thing not documented there is the least squares interpolation/extrapolation. This means that when the values from surrounding points are taken, they are not simply averaged. Rather, a least squares fit is done over (x, y, z) for each color channel, and that fit is used instead. As a result it also does extrapolation rather than just interpolation, which helps significantly at part boundaries and near occluding geometry. No irradiance gradients were used; we found these difficult to control and hard to combine with QMC sampling, and some other render engines also skip them and rely on least squares fitting instead.
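A minimal sketch of that fit, for one color channel: solve for the plane v ≈ a0 + a1·x + a2·y + a3·z that best matches the surrounding samples, then evaluate it at the query position. The normal-equations solver below is only to keep the example dependency-free; it assumes the samples are in general position (on a perfectly flat patch the system becomes singular and the real code would need a fallback).

```python
def lstsq_fit(points, values):
    """Least squares fit of v = a0 + a1*x + a2*y + a3*z to one color channel,
    solved via the normal equations with Gauss-Jordan elimination."""
    ata = [[0.0] * 4 for _ in range(4)]
    atv = [0.0] * 4
    for (x, y, z), v in zip(points, values):
        row = (1.0, x, y, z)
        for i in range(4):
            atv[i] += row[i] * v
            for j in range(4):
                ata[i][j] += row[i] * row[j]
    m = [ata[i] + [atv[i]] for i in range(4)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(4):
            if r != col and m[col][col] != 0.0:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [m[i][4] / m[i][i] for i in range(4)]

def evaluate(coeffs, pos):
    # Evaluating the fit outside the samples' convex hull is extrapolation,
    # which a plain weighted average cannot do; this is what helps at part
    # boundaries and near occluding geometry.
    a0, a1, a2, a3 = coeffs
    x, y, z = pos
    return a0 + a1 * x + a2 * y + a3 * z
```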
With the irradiance cache it is no longer possible to do full BRDF shading. Instead we use a trick from "An Approximate Global Illumination System for Computer Generated Films". Instead of interpolating RGB, we interpolate three vectors giving the average incoming light direction, one for each color channel. These are then used for shading, and can give plausible results for various shaders and bump mapping.
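The idea can be sketched as follows (an illustration of the technique from that paper, not the actual implementation): each channel accumulates a radiance-weighted average incoming direction, whose length carries the energy. At shading time the dot product with the possibly bump-mapped shading normal reconstructs a plausible per-channel diffuse response.

```python
def average_light_vectors(incoming):
    """Accumulate, per color channel, the radiance-weighted average incoming
    light direction. `incoming` is a list of (unit_direction, (r, g, b))
    pairs; the result is three vectors, one per channel."""
    vectors = [[0.0, 0.0, 0.0] for _ in range(3)]
    for direction, rgb in incoming:
        for c in range(3):
            for k in range(3):
                vectors[c][k] += rgb[c] * direction[k]
    return vectors

def shade_diffuse(vectors, shading_normal):
    # Shading with the interpolated direction vectors instead of a plain RGB
    # irradiance: light arriving from behind the shading normal contributes
    # nothing, so bump mapping still affects the result.
    return tuple(max(0.0, sum(n, * 1.0) if False else
                 max(0.0, sum(n * v for n, v in zip(shading_normal, vec))))
                 for vec in vectors)
```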