TransitionAmplitudeSum#
parametricmatrixmodels.modules.TransitionAmplitudeSum
- class TransitionAmplitudeSum(num_observables=None, output_size=None, Ds=None, init_magnitude=0.01, centered=True)[source]#
Bases: BaseModule
A module that computes the sum of transition amplitudes of trainable observables given an input of state vectors. The output can be centered by subtracting half the operator norm squared of each observable.
To produce \(q\) output values, given a single input of \(r\) state vectors of size \(n\) (shape (n, r)), denoted by \(v_i\), \(i=1, \ldots, r\), this module uses \(q\times l\) trainable observables \(D_{11}, D_{12}, \ldots, D_{1l}, D_{21}, \ldots, D_{ql}\) (each of shape (n, n)) to compute the output:
\[z_k = \sum_{i,j=1}^r \sum_{m=1}^l \left( |v_i^H D_{km} v_j|^2 - \frac{1}{2} ||D_{km}||^2_2 \right)\]
for \(k=1, \ldots, q\). This is equivalent to
\[z_k = \sum_{m=1}^l \left( \sum_{i,j=1}^r |v_i^H D_{km} v_j|^2 - \frac{r^2}{2} ||D_{km}||^2_2 \right)\]
where \(||\cdot||_2\) is the operator 2-norm (largest singular value), so for Hermitian \(D\), \(||D||_2\) is the largest absolute eigenvalue.
The \(-\frac{1}{2} ||D_{km}||^2_2\) term centers the value of each term and can be disabled by setting the centered parameter to False.
Warning
This module assumes that the input state vectors are normalized. If they are not, the output values will be scaled by the square of the norm of the input vectors.
Warning
Although the math shows that the centering term should be multiplied by \(r^2\), in practice this does not work well; setting the centering term to \(\frac{1}{2} ||D_{km}||^2_2\) instead works much better, so this non-\(r^2\) scaling is used here.
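The quantities above can be checked with a short NumPy sketch (illustrative only, not the library's implementation): it builds random Hermitian observables, evaluates both equivalent forms of \(z_k\), and also the non-\(r^2\) centering actually used.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, q, l = 4, 2, 3, 2  # state size, number of vectors, outputs, observables per output

# r normalized state vectors as the columns of V (shape (n, r))
V = rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))
V /= np.linalg.norm(V, axis=0)

# q*l random Hermitian observables D[k, m], each of shape (n, n)
A = rng.standard_normal((q, l, n, n)) + 1j * rng.standard_normal((q, l, n, n))
D = (A + np.conj(np.swapaxes(A, -1, -2))) / 2

# Transition amplitudes amps[k, m, i, j] = v_i^H D_km v_j
amps = np.einsum("ai,kmab,bj->kmij", np.conj(V), D, V)

# Operator 2-norm squared of each observable, shape (q, l)
norm_sq = np.linalg.norm(D, ord=2, axis=(-2, -1)) ** 2

# First form: per-term centering, summed over i, j, and m
z1 = ((np.abs(amps) ** 2) - 0.5 * norm_sq[:, :, None, None]).sum(axis=(1, 2, 3))
# Rearranged form: r^2/2 centering once per observable
z2 = ((np.abs(amps) ** 2).sum(axis=(2, 3)) - 0.5 * r**2 * norm_sq).sum(axis=1)
# The scaling the module actually uses drops the r^2 factor (see the warning above)
z_impl = ((np.abs(amps) ** 2).sum(axis=(2, 3)) - 0.5 * norm_sq).sum(axis=1)
```

The two displayed forms agree exactly; only the last line differs, per the warning.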
See also
ExpectationValueSum – A similar module that computes the sum of expectation values (instead of transition amplitudes) of trainable observables.
LowRankTransitionAmplitudeSum – A similar module that uses low-rank observables to reduce the number of trainable parameters.
- __init__(num_observables=None, output_size=None, Ds=None, init_magnitude=0.01, centered=True)[source]#
Initialize the module.
- Parameters:
num_observables (int | None) – Number of observable matrices, shorthand \(l\).
output_size (int | None) – Number of output features, shorthand \(q\).
Ds (Inexact[Array, 'q l n n'] | None) – Optional 4D array of matrices \(D_{ql}\) that define the observables. Each \(D\) must be Hermitian. If not provided, the observables will be randomly initialized when the module is compiled.
init_magnitude (float) – Initial magnitude for the random matrices if Ds is not provided. Default 1e-2.
centered (bool) – Whether to center the output by subtracting half the operator norm squared of each observable. Default True.
- Return type:
None
__call__ – Call the module with the current parameters and given input, state, and rng.
_get_callable – Returns a jax.jit-able and jax.grad-able callable that represents the module's forward pass.
astype – Convenience wrapper to set_precision using the dtype argument, returns self.
compile – Compile the module to be used with the given input shape.
deserialize – Deserialize the module from a dictionary.
get_hyperparameters – Get the hyperparameters of the module.
get_num_trainable_floats – Returns the number of trainable floats in the module.
get_output_shape – Get the output shape of the module given the input shape.
get_params – Get the current trainable parameters of the module.
get_state – Get the current state of the module.
is_ready – Return True if the module is initialized and ready for training or inference.
serialize – Serialize the module to a dictionary.
set_hyperparameters – Set the hyperparameters of the module.
set_params – Set the trainable parameters of the module.
set_precision – Set the precision of the module parameters and state.
set_state – Set the state of the module.
name – Returns the name of the module; unless overridden, this is the class name.
- __call__(data, /, *, training=False, state=(), rng=None)#
Call the module with the current parameters and given input, state, and rng.
- Parameters:
data (pmm.typing.Data) – PyTree of input arrays of shape (num_samples, …). Only the first dimension (num_samples) is guaranteed to be the same for all input arrays.
training (bool) – Whether the module is in training mode, by default False.
state (pmm.typing.State) – State of the module, by default ().
rng (Any) – JAX random key, by default None.
- Returns:
Output array of shape (num_samples, num_output_features) and new
state.
- Raises:
ValueError – If the module is not ready (i.e., compile() has not been called).
- Return type:
tuple[pmm.typing.Data, pmm.typing.State]
See also
_get_callable – Returns a callable that can be used to compute the output and new state given the parameters, input, training flag, state, and rng.
Params – Typing for the module parameters.
Data – Typing for the input and output data.
State – Typing for the module state.
- _get_callable()[source]#
Returns a jax.jit-able and jax.grad-able callable that represents the module's forward pass.
This method must be implemented by all subclasses and must return a jax.jit-able and jax.grad-able callable in the form of

module_callable(
    params: parametricmatrixmodels.typing.Params,
    data: parametricmatrixmodels.typing.Data,
    training: bool,
    state: parametricmatrixmodels.typing.State,
    rng: Any,
) -> (
    output: parametricmatrixmodels.typing.Data,
    new_state: parametricmatrixmodels.typing.State,
)
That is, all hyperparameters are traced out and the callable depends explicitly only on
- the module's parameters, as a PyTree with leaf nodes as JAX arrays,
- the input data, as a PyTree with leaf nodes as JAX arrays, each of which has shape (num_samples, …),
- the training flag, as a boolean,
- the module's state, as a PyTree with leaf nodes as JAX arrays,
and returns
- the output data, as a PyTree with leaf nodes as JAX arrays, each of which has shape (num_samples, …),
- the new module state, as a PyTree with leaf nodes as JAX arrays. The PyTree structure must match that of the input state, and additionally all leaf nodes must have the same shape as the input state leaf nodes.
The training flag will be traced out, so it doesn't need to be jittable.
- Returns:
A callable that takes the module’s parameters, input data,
training flag, state, and rng key and returns the output data and
new state.
- Raises:
NotImplementedError – If the method is not implemented in the subclass.
- Return type:
pmm.typing.ModuleCallable
See also
__call__ – Calls the module with the current parameters and given input, state, and rng.
ModuleCallable – Typing for the callable returned by this method.
Params – Typing for the module parameters.
Data – Typing for the input and output data.
State – Typing for the module state.
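The contract above can be illustrated with a toy callable in plain NumPy (names and the "scale" parameterization are invented for illustration; a real subclass returns a JAX-traceable function):

```python
import numpy as np

def module_callable(params, data, training, state, rng):
    # Toy forward pass matching the documented contract: the callable depends
    # explicitly only on params (a PyTree), batched data of shape
    # (num_samples, ...), the training flag, the state, and the rng key.
    (scale,) = params          # one trainable scalar held in a 1-tuple PyTree
    output = scale * data      # leading batch dimension is preserved
    return output, state       # stateless module: state passes through unchanged

params = (np.float32(2.0),)
x = np.ones((5, 3), dtype=np.float32)
y, new_state = module_callable(params, x, training=False, state=(), rng=None)
```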
- astype(dtype, /)#
Convenience wrapper to set_precision using the dtype argument, returns self.
- Parameters:
dtype (str | type[Any] | dtype | SupportsDType) – Precision to set for the module parameters. Valid options are:
For 32-bit precision (all options are equivalent): np.float32, np.complex64, "float32", "complex64", "single", "f32", "c64", 32
For 64-bit precision (all options are equivalent): np.float64, np.complex128, "float64", "complex128", "double", "f64", "c128", 64
- Returns:
BaseModule – The module itself, with updated precision.
- Raises:
ValueError – If the precision is invalid or if 64-bit precision is requested but JAX_ENABLE_X64 is not set.
RuntimeError – If the module is not ready (i.e., compile() has not been called).
- Return type:
BaseModule
See also
set_precision – Sets the precision of the module parameters and state.
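The documented alias sets can be normalized to a bit width with a small helper (an illustrative sketch, not part of the library):

```python
import numpy as np

# The precision aliases listed above, grouped by the bit width they select
ALIASES_32 = {np.float32, np.complex64, "float32", "complex64", "single", "f32", "c64", 32}
ALIASES_64 = {np.float64, np.complex128, "float64", "complex128", "double", "f64", "c128", 64}

def precision_bits(prec):
    # Map any documented alias to 32 or 64; reject everything else,
    # mirroring the documented ValueError for invalid precisions.
    if prec in ALIASES_32:
        return 32
    if prec in ALIASES_64:
        return 64
    raise ValueError(f"invalid precision: {prec!r}")
```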
- compile(rng, input_shape)[source]#
Compile the module to be used with the given input shape.
This method initializes the module’s parameters and state based on the input shape and random key.
This is needed since Models are built before the input data is given; before training or inference can be done, the module needs to be compiled, and each module passes its output shape to the next module's compile method.
The RNG key is used to initialize random parameters, if needed. It is not used to trace or jit the module's callable; that is done automatically later.
- Parameters:
rng (Any) – JAX random key.
input_shape (pmm.typing.DataShape) – PyTree of input shape tuples, e.g.
((num_features,),), to compile the module for. All data passed to the module later must have the same PyTree structure and shape in all leaf array dimensions except the leading batch dimension.
- Raises:
NotImplementedError – If the method is not implemented in the subclass.
- Return type:
None
See also
DataShape – Typing for the input shape.
get_output_shape – Get the output shape of the module given the input shape.
- deserialize(data, /)#
Deserialize the module from a dictionary.
This method sets the module’s parameters and state based on the provided dictionary.
The default implementation expects the dictionary to contain the module’s name, trainable parameters, and state.
- Parameters:
data (dict[str, Any]) – Dictionary containing the serialized module data.
- Raises:
ValueError – If the serialized data does not contain the expected keys or if the version of the serialized data is not compatible with the current package version.
- Return type:
None
- get_hyperparameters()[source]#
Get the hyperparameters of the module.
Hyperparameters are used to configure the module and are not trainable. They can be set via set_hyperparameters.
- Returns:
Dictionary containing the hyperparameters of the module.
- Return type:
pmm.typing.HyperParams
See also
set_hyperparameters – Set the hyperparameters of the module.
HyperParams – Typing for the hyperparameters. Simply an alias for Dict[str, Any].
- get_num_trainable_floats()[source]#
Returns the number of trainable floats in the module. If the module does not have trainable parameters, returns 0. If the module is not ready, returns None.
- Returns:
Number of trainable floats in the module, or None if the module
is not ready.
- Return type:
int | None
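One plausible counting rule can be sketched as follows (illustrative, not the library's implementation; whether complex entries count as two floats is an assumption):

```python
import numpy as np

def num_trainable_floats(params):
    # Count real scalar entries across all leaf arrays of a flat params tuple;
    # each complex entry counts twice (real and imaginary part).
    total = 0
    for leaf in params:
        arr = np.asarray(leaf)
        total += arr.size * (2 if np.iscomplexobj(arr) else 1)
    return total

# e.g. one complex observable array of shape (q, l, n, n) = (3, 2, 4, 4)
params = (np.zeros((3, 2, 4, 4), dtype=np.complex64),)
```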
- get_output_shape(input_shape)[source]#
Get the output shape of the module given the input shape.
- Parameters:
input_shape (pmm.typing.DataShape) – PyTree of input shape tuples, e.g. ((num_features,),), to get the output shape for.
- Returns:
PyTree of output shape tuples, e.g. ((num_output_features,),), corresponding to the output shape of the module for the given input shape.
- Raises:
NotImplementedError – If the method is not implemented in the subclass.
- Return type:
pmm.typing.DataShape
See also
DataShape – Typing for the input and output shape.
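For this module the shape mapping is simple: a single input of shape (n, r) maps to output_size (\(q\)) values per sample. A sketch of that mapping (the helper name is invented for illustration):

```python
def transition_amplitude_sum_output_shape(input_shape, output_size):
    # One leaf of shape (n, r) in, one leaf of shape (output_size,) out;
    # the leading batch dimension is not part of the shape PyTree.
    ((n, r),) = input_shape
    return ((output_size,),)
```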
- get_params()[source]#
Get the current trainable parameters of the module. If the module has no trainable parameters, this method should return an empty tuple.
- Returns:
PyTree with leaf nodes as JAX arrays representing the module’s
trainable parameters.
- Raises:
NotImplementedError – If the method is not implemented in the subclass.
- Return type:
pmm.typing.Params
See also
set_params – Set the trainable parameters of the module.
Params – Typing for the module parameters.
- get_state()#
Get the current state of the module.
States are used to store “memory” or other information that is not passed between modules, is not trainable, but may be updated during either training or inference. e.g. batch normalization state.
The state is optional, in which case this method should return the empty tuple.
- Returns:
PyTree with leaf nodes as JAX arrays representing the module’s
state.
- Return type:
pmm.typing.State
See also
set_state – Set the state of the module.
State – Typing for the module state.
- is_ready()[source]#
Return True if the module is initialized and ready for training or inference.
- Returns:
True if the module is ready, False otherwise.
- Raises:
NotImplementedError – If the method is not implemented in the subclass.
- Return type:
bool
- property name: str#
Returns the name of the module; unless overridden, this is the class name.
- Returns:
Name of the module.
- serialize()#
Serialize the module to a dictionary.
This method returns a dictionary representation of the module, including its parameters and state.
The default implementation serializes the module’s name, hyperparameters, trainable parameters, and state via a simple dictionary.
This only works if the module’s hyperparameters are auto-serializable. This includes basic types as well as numpy arrays.
- set_hyperparameters(hyperparams)[source]#
Set the hyperparameters of the module.
Hyperparameters are used to configure the module and are not trainable. They can be set via this method.
The default implementation uses setattr to set the hyperparameters as attributes of the class instance.
- Parameters:
hyperparams (pmm.typing.HyperParams) – Dictionary containing the hyperparameters to set.
- Raises:
TypeError – If hyperparameters is not a dictionary.
- Return type:
None
See also
get_hyperparameters – Get the hyperparameters of the module.
HyperParams – Typing for the hyperparameters. Simply an alias for Dict[str, Any].
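The documented default behavior (setattr plus a TypeError on non-dict input) can be sketched with a toy class (illustrative only):

```python
class ToyModule:
    # Mirrors the documented default implementation: each hyperparameter
    # becomes an attribute of the instance via setattr.
    def set_hyperparameters(self, hyperparams):
        if not isinstance(hyperparams, dict):
            raise TypeError("hyperparams must be a dictionary")
        for key, value in hyperparams.items():
            setattr(self, key, value)

m = ToyModule()
m.set_hyperparameters({"num_observables": 2, "output_size": 3, "centered": True})
```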
- set_params(params)[source]#
Set the trainable parameters of the module.
- Parameters:
params (pmm.typing.Params) – PyTree with leaf nodes as JAX arrays representing the new trainable parameters of the module.
- Raises:
NotImplementedError – If the method is not implemented in the subclass.
- Return type:
None
See also
get_params – Get the trainable parameters of the module.
Params – Typing for the module parameters.
- set_precision(prec, /)#
Set the precision of the module parameters and state.
- Parameters:
prec (Any | str | int) – Precision to set for the module parameters. Valid options are:
For 32-bit precision (all options are equivalent): np.float32, np.complex64, "float32", "complex64", "single", "f32", "c64", 32.
For 64-bit precision (all options are equivalent): np.float64, np.complex128, "float64", "complex128", "double", "f64", "c128", 64.
- Raises:
ValueError – If the precision is invalid or if 64-bit precision is requested but JAX_ENABLE_X64 is not set.
RuntimeError – If the module is not ready (i.e., compile() has not been called).
- Return type:
None
See also
astype – Convenience wrapper to set_precision using the dtype argument, returns self.