Scan#

This module generates parameter-dependent distributions for a selection of sample points (points in parameter space), called spoints throughout the code.

Two classes are defined:

  • Scanner: A general class, set up with a function (specified in set_dfunction()) that depends on points in parameter space and a set of sample points in this parameter space (specified via one of the set_spoints_... methods). The function is then run for every sample point and the results are written to a Data-like object.

  • WilsonScanner: This is a subclass of Scanner whose function receives the point in parameter space as a Wilson coefficient, in the form of a wilson.Wilson object, as its first argument.

Scanner#

class clusterking.scan.Scanner[source]#

Bases: clusterking.worker.DataWorker

This class is set up with a function (specified in set_dfunction()) that depends on points in parameter space and a set of sample points in this parameter space (specified via one of the set_spoints_... methods). The function is then run for every sample point (in the run() method) and the results are written to a Data-like object.

Usage example:

import clusterking as ck

def myfunction(parameters, x):
    return sum(parameters) * x

# Initialize Scanner class
s = ck.scan.Scanner()

# Set the function
s.set_dfunction(myfunction)

# Set the sample points
s.set_spoints_equidist({
    "a": (-1, 1, 10),
    "b": (-1, 1, 10)
})

# Initialize a Data class to write to:
d = ck.data.Data()

# Run it
r = s.run(d)

# Write back results to data
r.write()

__init__()[source]#

Initializes the clusterking.scan.Scanner class.

property imaginary_prefix: str#

Prefix for the name of imaginary parts of coefficients. Also see e.g. set_spoints_equidist(). Read only.

property spoints#

Points in parameter space that are sampled (read-only).

property coeffs#

The names of the parameters/coefficients/dimensions of the spoints (read-only). Set after the spoints are set. Does not include the column names of the imaginary parts.

set_progress_bar(show: bool, **kwargs) None[source]#

Settings for the progress bar.

Parameters
  • show – Show progress bar?

  • **kwargs – Keyword arguments for tqdm progress bar

Returns

None

set_dfunction(func: Callable, binning: Optional[Sized] = None, sampling: Optional[Sized] = None, normalize=False, xvar='xvar', yvar='yvar', **kwargs)[source]#

Set the function that generates the distributions that are later clustered (e.g. a differential cross section).

Parameters
  • func – A function that takes the point in parameter space as its first argument (note: the parameters are given in alphabetical order with respect to the parameter name!). It should return either a float or a np.ndarray. If the binning or sampling options are specified, only floats are allowed as return value.

  • binning – If this parameter is set to an array-like object, we will integrate the function over the specified bins for every point in parameter space.

  • sampling – If this parameter is set to an array-like object, we will apply the function to these points for every point in parameter space.

  • normalize – If a binning is specified, normalize the resulting distribution.

  • xvar – Name of variable on x-axis

  • yvar – Name of variable on y-axis

  • **kwargs – All other keyword arguments are passed to the function.

Returns

None
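The difference between binning and sampling can be sketched in plain NumPy (a standalone illustration with a hypothetical one-dimensional function, not the library's actual implementation; the trapezoidal integration is an assumption about how bin contents could be computed):

```python
import numpy as np

def f(x):
    # Hypothetical distribution for one fixed point in parameter space
    return x ** 2

# sampling: evaluate the function at the given points
sampling = np.array([0.0, 0.5, 1.0])
sampled = np.array([f(x) for x in sampling])  # one value per sample point

# binning: integrate the function over each bin [edge_i, edge_{i+1}]
edges = np.linspace(0.0, 1.0, 5)  # 5 edges -> 4 bins
binned = []
for lo, hi in zip(edges[:-1], edges[1:]):
    xs = np.linspace(lo, hi, 101)
    ys = f(xs)
    # trapezoidal rule written out by hand
    binned.append(np.sum((ys[:-1] + ys[1:]) / 2 * np.diff(xs)))
binned = np.array(binned)

# normalize=True would divide the distribution by its total integral
normalized = binned / binned.sum()
```

With binning, the resulting distribution has one entry per bin (here four); with sampling, one entry per evaluation point.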

set_spoints_grid(values: Dict[str, Iterable[float]]) None[source]#

Set a grid of points in sampling space.

Parameters

values

A dictionary of the following form:

{
    <coeff name>: [
        value_1,
        ...,
        value_n
    ]
}

where value_1, …, value_n can be complex numbers in general.
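Such a grid corresponds to the Cartesian product of the per-coefficient value lists. A minimal sketch of that construction (using itertools directly, under the assumption that coefficients are taken in alphabetical order as stated for set_dfunction()):

```python
import itertools

# Hypothetical input in the form expected by set_spoints_grid()
values = {
    "a": [0.0, 0.5],
    "b": [1j, 2 + 1j, 3],  # values may be complex
}

# Cartesian product of the value lists, coefficients in alphabetical order
coeffs = sorted(values)
spoints = list(itertools.product(*(values[c] for c in coeffs)))
# 2 values for "a" times 3 values for "b" -> 6 sample points
```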

set_spoints_equidist(ranges: Dict[str, tuple]) None[source]#

Set a list of ‘equidistant’ points in sampling space.

Parameters

ranges

A dictionary of the following form:

{
    <coeff name>: (
        <Minimum of coeff>,
        <Maximum of coeff>,
        <Number of sample points between min and max>,
    )
}

Note

In order to add imaginary parts to your coefficients, prepend im_ to their name (you can customize this prefix with set_imaginary_prefix()).

Example:

s = Scanner()
s.set_spoints_equidist(
    {
        "a": (-2, 2, 4),
        "im_a": (-1, 1, 10),
    },
    ...
)

This will sample the real part of a at 4 points between -2 and 2 and the imaginary part of a at 10 points between -1 and 1.

Returns

None
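Under this convention, every combination of a real-part point and an imaginary-part point yields one complex value of the coefficient. A standalone sketch of the example above (assuming the equidistant points behave like np.linspace(min, max, n)):

```python
import numpy as np

# Hypothetical equidistant axes matching the example above
real_part = np.linspace(-2, 2, 4)    # "a": 4 points in [-2, 2]
imag_part = np.linspace(-1, 1, 10)   # "im_a": 10 points in [-1, 1]

# Each (real, imaginary) combination is one complex value of a
a_values = np.array([r + 1j * i for r in real_part for i in imag_part])
# 4 * 10 = 40 complex sample values for the coefficient a
```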

add_spoints_noise(generator='gauss', **kwargs) None[source]#

Add noise to existing sample points.

Parameters
  • generator – Random number generator. Default is gauss. Currently supported: gauss.

  • **kwargs – Additional keywords to configure the generator. These keywords are as follows (value assignments are the default values): gauss: mean = 0, sigma = 1
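What Gaussian noise on existing sample points amounts to can be sketched as follows (standalone, with hypothetical data; mean = 0 and sigma = 1 are the stated defaults of the gauss generator):

```python
import random

random.seed(0)  # only to make this sketch reproducible

# Hypothetical existing sample points, one row per point
spoints = [[0.0, 1.0], [0.5, -0.5]]

mean, sigma = 0.0, 1.0  # stated defaults for the "gauss" generator
noisy = [
    [value + random.gauss(mean, sigma) for value in row]
    for row in spoints
]
# The shape of the grid is kept; each coordinate is shifted independently
```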

set_no_workers(no_workers: int) None[source]#

Set the number of worker processes to be used. This will usually translate to the number of CPUs being used.

Parameters

no_workers – Number of worker processes

Returns

None

set_imaginary_prefix(value: str) None[source]#

Set prefix to be used for imaginary parameters in set_spoints_grid() and set_spoints_equidist().

Parameters

value – Prefix string

Returns

None

run(data: clusterking.data.data.Data) Optional[clusterking.scan.scanner.ScannerResult][source]#

Calculates the function for all sample points. Call write() on the returned ScannerResult to write the results to the dataframe.

Parameters

data – Data object.

Returns

ScannerResult or None

Warning

The function set in set_dfunction() has to be a globally defined function in order to use multiprocessing; otherwise you will probably run into the error Can't pickle local object ... issued by the Python multiprocessing module. If you run into problems like this, you can always run in single-core mode by calling set_no_workers(1).
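This constraint can be demonstrated without clusterking at all (a standalone sketch using only the pickle module, which multiprocessing relies on to send the function to worker processes):

```python
import pickle

def dfunc(parameters, x):
    # A globally defined function like this can be pickled and hence
    # dispatched to worker processes by the multiprocessing module
    return sum(parameters) * x

def is_picklable(obj):
    try:
        pickle.dumps(obj)
        return True
    except Exception:
        return False

# Lambdas cannot be pickled ...
lambda_ok = is_picklable(lambda parameters, x: sum(parameters) * x)

# ... and neither can locally defined functions, which is exactly what
# triggers the "Can't pickle local object ..." error mentioned above
def make_local():
    def local_dfunc(parameters, x):
        return sum(parameters) * x
    return local_dfunc

local_ok = is_picklable(make_local())
```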

class clusterking.scan.ScannerResult(data: clusterking.data.data.Data, rows: List[List[float]], spoints, md, coeffs)[source]#

Bases: clusterking.result.DataResult

__init__(data: clusterking.data.data.Data, rows: List[List[float]], spoints, md, coeffs)[source]#

property imaginary_prefix: str#

Prefix for the name of imaginary parts of coefficients. Also see e.g. set_spoints_equidist(). Read only.

property spoints#

Points in parameter space that are sampled (read-only).

property coeffs#

The names of the parameters/coefficients/dimensions of the spoints (read-only). Set after the spoints are set. Does not include the column names of the imaginary parts.

write() None[source]#

Write the results back to the Data object.

WilsonScanner#

class clusterking.scan.WilsonScanner(scale, eft, basis)[source]#

Bases: clusterking.scan.scanner.Scanner

Scans the NP parameter space in a grid and also in the kinematic variable.

Usage example:

import flavio
import functools
import numpy as np
import clusterking as ck

# Initialize Scanner object
s = ck.scan.WilsonScanner(scale=5, eft='WET', basis='flavio')

# Sample 4 points for each of the 3 Wilson coefficients
s.set_spoints_equidist(
    {
        "CVL_bctaunutau": (-1, 1, 4),
        "CSL_bctaunutau": (-1, 1, 4),
        "CT_bctaunutau": (-1, 1, 4)
    }
)

# Set function and binning
s.set_dfunction(
    functools.partial(flavio.np_prediction, "dBR/dq2(B+->Dtaunu)"),
    binning=np.linspace(3.15, 11.66, 10),
    normalize=True
)

# Initialize a Data object to write to
d = ck.Data()

# Run and write back data
r = s.run(d)
r.write()

__init__(scale, eft, basis)[source]#

Initializes the clusterking.scan.WilsonScanner class.

Parameters
  • scale – Wilson coeff input scale in GeV

  • eft – Wilson coeff input eft

  • basis – Wilson coeff input basis

Note

A list of applicable bases and EFTs can be found at https://wcxf.github.io/bases.html

property scale#

Scale of the input Wilson coefficients in GeV (read-only).

property eft#

Wilson coefficient input EFT (read-only)

property basis#

Wilson coefficient input basis (read-only)

class clusterking.scan.WilsonScannerResult(data: clusterking.data.data.Data, rows: List[List[float]], spoints, md, coeffs)[source]#

Bases: clusterking.scan.scanner.ScannerResult