atomiq.atomiq module

class atomiq.atomiq.AtomiqExperiment(managers_or_parent, name=None, arg_provider=None, component_map=None, *args, **kwargs)[source]

Bases: EnvExperiment

CHUNKSIZE = 10
components = ['log']
arg_provider = <atomiq.arguments.arguments.NativeArgumentProvider object>
prepare()[source]

Prepares components and structure. Called by ARTIQ in the prepare phase; see the ARTIQ documentation for more information on experiment phases.

Note

If you override this method in your experiment, make sure to call super().prepare()

build()[source]

Initializes arguments. Called by ARTIQ in the build phase; see the ARTIQ documentation for more information on experiment phases.

Note

If you override this method in your experiment, make sure to call super().build()
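
For example, a minimal sketch of an experiment that extends both phases (the class name is illustrative; only the super() calls are prescribed by the notes above, and components = ["log"] is simply the class default):

    from atomiq.atomiq import AtomiqExperiment

    class MyExperiment(AtomiqExperiment):
        components = ["log"]

        def build(self):
            super().build()  # let atomiq initialize its arguments first
            # additional build-time setup (e.g. extra arguments) goes here

        def prepare(self):
            super().prepare()  # let atomiq prepare components and structure
            # additional host-side preparation goes here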

chunker(mult_scan, size=100)[source]

Generator to call a kernel with chunks of scan points.

Parameters:

size (int32)

Return type:

TList
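
The snippet below only illustrates the chunking idea (splitting a long sequence of scan points into fixed-size batches that are handed to a kernel one at a time); it is a generic stand-in, not the actual atomiq implementation or call signature:

    def chunks(points, size=100):
        """Yield successive batches of at most `size` scan points (illustration only)."""
        batch = []
        for p in points:
            batch.append(p)
            if len(batch) == size:
                yield batch
                batch = []
        if batch:
            yield batch

    # each batch would correspond to one kernel invocation in the run phase
    for batch in chunks(range(25), size=10):
        print(len(batch))  # prints 10, 10, 5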

run()[source]

Run entry point for ARTIQ; see the ARTIQ documentation for more information on experiment phases.

Warning

Do not implement this entry point in your experiment. Use the provided sub-phases (prerun, step, etc.) instead. More information can be found in the Phases and Chunking documentation.

prerun()[source]

Kernel entry point, run once at the beginning of the run phase of an experiment. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

prerun_host()[source]

Host entry point, run once at the beginning of the run phase of an experiment. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

postrun()[source]

Kernel entry point, run once at the end of the run phase of an experiment. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

postrun_host()[source]

Host entry point, run once at the end of the run phase of an experiment. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

prestep(point)[source]

Kernel entry point, run before every step. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

poststep(point)[source]

Kernel entry point, run after every step. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

prechunk(points)[source]

Kernel entry point, run once at the beginning of a chunk. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

postchunk(points)[source]

Kernel entry point, run once at the end of a chunk. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

prechunk_host(points)[source]

Host entry point, run once at the beginning of a chunk. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

postchunk_host(points)[source]

Host entry point, run once at the end of a chunk. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

step(point)[source]

Kernel entry point, for the main experiment sequence code. This method must be overloaded by the user. Details can be found in the Phases and Chunking documentation.
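
Since step() is the only phase that must be overloaded, a minimal experiment might look like the following sketch. The handling of the scan point is illustrative, and keeping the @kernel decorator on the override is an assumption based on step() being a kernel entry point:

    from artiq.experiment import kernel
    from atomiq.atomiq import AtomiqExperiment

    class MySequence(AtomiqExperiment):
        components = ["log"]

        @kernel  # assumption: user overrides of kernel entry points keep the decorator
        def step(self, point):
            # `point` holds the current scan point; a real experiment would
            # drive its components (DDS channels, TTLs, ...) here
            pass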

analyze()

Entry point for analyzing the results of the experiment.

This method may be overloaded by the user to implement the analysis phase of the experiment, for example fitting curves.

Splitting this phase from run() enables tweaking the analysis algorithm on pre-existing data, and CPU-bound analyses to be run overlapped with the next experiment in a pipelined manner.

This method must not interact with the hardware.
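
A hedged sketch of an analysis override; the dataset keys and the linear fit are purely illustrative:

    import numpy as np
    from atomiq.atomiq import AtomiqExperiment

    class MyAnalysis(AtomiqExperiment):
        def analyze(self):
            # dataset keys are hypothetical
            x = np.asarray(self.get_dataset("scan.x"))
            y = np.asarray(self.get_dataset("scan.y"))
            slope, offset = np.polyfit(x, y, 1)  # stand-in for a real fit model
            self.set_dataset("fit.slope", slope, broadcast=True)
            self.set_dataset("fit.offset", offset, broadcast=True)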

append_to_dataset(key, value)

Append a value to a dataset.

The target dataset must be a list (i.e. support append()), and must have previously been set from this experiment.

The broadcast/persist/archive mode of the given key remains unchanged from when the dataset was last set. Appended values are transmitted efficiently as incremental modifications in broadcast mode.
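
For example (dataset key and phase placement are illustrative), the dataset is first created as a list and then extended entry by entry:

    from atomiq.atomiq import AtomiqExperiment

    class CountingExperiment(AtomiqExperiment):
        def prerun_host(self):
            # create the dataset as a list so it supports append()
            self.set_dataset("results.counts", [], broadcast=True, archive=True)

        def postchunk_host(self, points):
            # hypothetical: record one entry per processed chunk
            self.append_to_dataset("results.counts", len(points))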

call_child_method(method, *args, **kwargs)

Calls the named method for each child, if it exists for that child, in the order of registration.

Parameters:
  • method (str) -- Name of the method to call

  • args -- Tuple of positional arguments to pass to all children

  • kwargs -- Dict of keyword arguments to pass to all children

get_argument(key, processor, group=None, tooltip=None)

Retrieves and returns the value of an argument.

This function should only be called from build.

Parameters:
  • key -- Name of the argument.

  • processor -- A description of how to process the argument, such as instances of BooleanValue and NumberValue.

  • group -- An optional string that defines what group the argument belongs to, for user interface purposes.

  • tooltip -- An optional string to describe the argument in more detail, applied as a tooltip to the argument name in the user interface.

get_dataset(key, default=<class 'artiq.language.environment.NoDefault'>, archive=True)

Returns the contents of a dataset.

The local storage is searched first, followed by the master storage (which contains the broadcasted datasets from all experiments) if the key was not found initially.

If the dataset does not exist, returns the default value. If no default is provided, raises KeyError.

By default, datasets obtained by this method are archived into the output HDF5 file of the experiment. If an archived dataset is requested more than one time or is modified, only the value at the time of the first call is archived. This may impact reproducibility of experiments.

Parameters:

archive -- Set to False to prevent archival together with the run's results. Default is True.
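
For example (key and default value are illustrative), a calibration value can be looked up with a fallback and kept out of the run's archive:

    from atomiq.atomiq import AtomiqExperiment

    class UsesCalibration(AtomiqExperiment):
        def prerun_host(self):
            self.threshold = self.get_dataset("calibration.threshold",
                                              default=0.5, archive=False)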

get_dataset_metadata(key, default=<class 'artiq.language.environment.NoDefault'>)

Returns the metadata of a dataset.

Returns a dictionary with items describing the dataset, including the units, scale and precision.

This function is used to get additional information for displaying the dataset.

See set_dataset() for documentation of metadata items.

get_device(key)

Creates and returns a device driver.

get_device_db()

Returns the full contents of the device database.

interactive(title='')

Request arguments from the user interactively.

This context manager returns a namespace object on which the method setattr_argument() should be called, with the usual semantics.

When the context manager terminates, the experiment is blocked and the user is presented with the requested argument widgets. After the user enters values, the experiment is resumed and the namespace contains the values of the arguments.

If the interactive arguments request is cancelled, raises CancelledArgsError.
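
A sketch of requesting a value from the operator between chunks; the phase, title and argument name are illustrative:

    from artiq.experiment import NumberValue
    from atomiq.atomiq import AtomiqExperiment

    class InteractiveTune(AtomiqExperiment):
        def postchunk_host(self, points):
            with self.interactive(title="Tune") as inter:
                inter.setattr_argument("attenuation", NumberValue(10.0))
            # after the block returns, the entered value is on the namespace
            self.attenuation = inter.attenuation
            # a cancelled request raises CancelledArgsError (see above)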

mutate_dataset(key, index, value)

Mutates an existing dataset at the given index (e.g. sets a value at a given position in a NumPy array).

If the dataset was created in broadcast mode, the modification is immediately transmitted.

If the index is a tuple of integers, it is interpreted as slice(*index). If the index is a tuple of tuples, each sub-tuple is interpreted as slice(*sub_tuple) (multi-dimensional slicing).
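
For example (dataset key and values are illustrative), single elements and slices of a previously created array can be updated:

    import numpy as np
    from atomiq.atomiq import AtomiqExperiment

    class MutateSketch(AtomiqExperiment):
        def prerun_host(self):
            # hypothetical array holding one result per scan point
            self.set_dataset("results.counts", np.zeros(100), broadcast=True)

        def postchunk_host(self, points):
            # write a single element ...
            self.mutate_dataset("results.counts", 0, 17.0)
            # ... or a slice: the tuple (10, 20) is interpreted as slice(10, 20)
            self.mutate_dataset("results.counts", (10, 20), np.arange(10.0))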

register_child(child)

set_dataset(key, value, *, unit=None, scale=None, precision=None, broadcast=False, persist=False, archive=True)

Sets the contents and handling modes of a dataset.

Datasets must be scalars (bool, int, float or NumPy scalar) or NumPy arrays.

Parameters:
  • unit -- A string representing the unit of the value.

  • scale -- A numerical factor that is used to adjust the value of the dataset to match the scale or units of the experiment's reference frame when the value is displayed.

  • precision -- The maximum number of digits to print after the decimal point. Set precision=None to print as many digits as necessary to uniquely specify the value. Uses IEEE unbiased rounding.

  • broadcast -- the data is sent in real-time to the master, which dispatches it.

  • persist -- the master should store the data on-disk. Implies broadcast.

  • archive -- the data is saved into the local storage of the current run (archived as a HDF5 file).
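
For example (keys and values are illustrative), a result can be broadcast for live display, persisted by the master and archived with the run:

    import numpy as np
    from atomiq.atomiq import AtomiqExperiment

    class StoresResults(AtomiqExperiment):
        def postrun_host(self):
            self.set_dataset("mot.loading_rate", 1.5e6, unit="1/s",
                             broadcast=True, persist=True, archive=True)
            # NumPy arrays are accepted as well
            self.set_dataset("mot.mean_image", np.zeros((10, 10)), broadcast=True)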

set_default_scheduling(priority=None, pipeline_name=None, flush=None)

Sets the default scheduling options.

This function should only be called from build.

setattr_argument(key, processor=None, group=None, tooltip=None)

Sets an argument as attribute. The names of the argument and of the attribute are the same.

The key is added to the instance's kernel invariants.
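
For example (argument names, defaults and group are illustrative), arguments are typically declared in build() after calling super().build():

    from artiq.experiment import BooleanValue, NumberValue
    from atomiq.atomiq import AtomiqExperiment

    class WithArguments(AtomiqExperiment):
        def build(self):
            super().build()  # initialize atomiq's own arguments first
            self.setattr_argument("repetitions", NumberValue(100, step=1),
                                  group="Acquisition")
            self.setattr_argument("use_cooling", BooleanValue(True),
                                  tooltip="Enable the cooling stage")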

setattr_dataset(key, default=<class 'artiq.language.environment.NoDefault'>, archive=True)

Sets the contents of a dataset as attribute. The names of the dataset and of the attribute are the same.

setattr_device(key)

Sets a device driver as attribute. The names of the device driver and of the attribute are the same.

The key is added to the instance's kernel invariants.
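
For example ("core" is present in every ARTIQ device database, while "ttl0" is an illustrative device name):

    from atomiq.atomiq import AtomiqExperiment

    class WithDevices(AtomiqExperiment):
        def build(self):
            super().build()
            self.setattr_device("core")  # the device becomes self.core
            self.setattr_device("ttl0")  # illustrative; must exist in the device db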

class atomiq.atomiq.AtomiqBlock(*args, **kwargs)[source]

Bases: AtomiqExperiment

CHUNKSIZE = 10
analyze()

Entry point for analyzing the results of the experiment.

This method may be overloaded by the user to implement the analysis phase of the experiment, for example fitting curves.

Splitting this phase from run() enables tweaking the analysis algorithm on pre-existing data, and CPU-bound analyses to be run overlapped with the next experiment in a pipelined manner.

This method must not interact with the hardware.

append_to_dataset(key, value)

Append a value to a dataset.

The target dataset must be a list (i.e. support append()), and must have previously been set from this experiment.

The broadcast/persist/archive mode of the given key remains unchanged from when the dataset was last set. Appended values are transmitted efficiently as incremental modifications in broadcast mode.

arg_provider = <atomiq.arguments.arguments.NativeArgumentProvider object>
build()

Initializes arguments. Called by ARTIQ in the build phase; see the ARTIQ documentation for more information on experiment phases.

Note

If you override this method in your experiment, make sure to call super().build()

call_child_method(method, *args, **kwargs)

Calls the named method for each child, if it exists for that child, in the order of registration.

Parameters:
  • method (str) -- Name of the method to call

  • args -- Tuple of positional arguments to pass to all children

  • kwargs -- Dict of keyword arguments to pass to all children

chunker(mult_scan, size=100)

Generator to call a kernel with chunks of scan points.

Parameters:

size (int32)

Return type:

TList

components = ['log']
get_argument(key, processor, group=None, tooltip=None)

Retrieves and returns the value of an argument.

This function should only be called from build.

Parameters:
  • key -- Name of the argument.

  • processor -- A description of how to process the argument, such as instances of BooleanValue and NumberValue.

  • group -- An optional string that defines what group the argument belongs to, for user interface purposes.

  • tooltip -- An optional string to describe the argument in more detail, applied as a tooltip to the argument name in the user interface.

get_dataset(key, default=<class 'artiq.language.environment.NoDefault'>, archive=True)

Returns the contents of a dataset.

The local storage is searched first, followed by the master storage (which contains the broadcasted datasets from all experiments) if the key was not found initially.

If the dataset does not exist, returns the default value. If no default is provided, raises KeyError.

By default, datasets obtained by this method are archived into the output HDF5 file of the experiment. If an archived dataset is requested more than one time or is modified, only the value at the time of the first call is archived. This may impact reproducibility of experiments.

Parameters:

archive -- Set to False to prevent archival together with the run's results. Default is True.

get_dataset_metadata(key, default=<class 'artiq.language.environment.NoDefault'>)

Returns the metadata of a dataset.

Returns a dictionary with items describing the dataset, including the units, scale and precision.

This function is used to get additional information for displaying the dataset.

See set_dataset() for documentation of metadata items.

get_device(key)

Creates and returns a device driver.

get_device_db()

Returns the full contents of the device database.

interactive(title='')

Request arguments from the user interactively.

This context manager returns a namespace object on which the method setattr_argument() should be called, with the usual semantics.

When the context manager terminates, the experiment is blocked and the user is presented with the requested argument widgets. After the user enters values, the experiment is resumed and the namespace contains the values of the arguments.

If the interactive arguments request is cancelled, raises CancelledArgsError.

mutate_dataset(key, index, value)

Mutates an existing dataset at the given index (e.g. sets a value at a given position in a NumPy array).

If the dataset was created in broadcast mode, the modification is immediately transmitted.

If the index is a tuple of integers, it is interpreted as slice(*index). If the index is a tuple of tuples, each sub-tuple is interpreted as slice(*sub_tuple) (multi-dimensional slicing).

postchunk(points)

Kernel entry point, run once at the end of a chunk. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

postchunk_host(points)

Host entry point, run once at the end of a chunk. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

postrun()

Kernel entry point, run once at the end of the run phase of an experiment. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

postrun_host()

Host entry point, run once at the end of the run phase of an experiment. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

poststep(point)

Kernel entry point, run after every step. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

prechunk(points)

Kernel entry point, run once at the beginning of a chunk. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

prechunk_host(points)

Host entry point, run once at the beginning of a chunk. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

prepare()

Prepares components and structure. Called by ARTIQ in the prepare phase; see the ARTIQ documentation for more information on experiment phases.

Note

If you override this method in your experiment, make sure to call super().prepare()

prerun()

Kernel entry point, run once at the beginning of the run phase of an experiment. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

prerun_host()

Host entry point, run once at the beginning of the run phase of an experiment. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

prestep(point)

Kernel entry point, run before every step. This method can be overloaded by the user. Details can be found in the Phases and Chunking documentation.

register_child(child)

run()

Run entry point for ARTIQ; see the ARTIQ documentation for more information on experiment phases.

Warning

Do not implement this entry point in your experiment. Use the provided sub-phases (prerun, step, etc.) instead. More information can be found in the Phases and Chunking documentation.

set_dataset(key, value, *, unit=None, scale=None, precision=None, broadcast=False, persist=False, archive=True)

Sets the contents and handling modes of a dataset.

Datasets must be scalars (bool, int, float or NumPy scalar) or NumPy arrays.

Parameters:
  • unit -- A string representing the unit of the value.

  • scale -- A numerical factor that is used to adjust the value of the dataset to match the scale or units of the experiment's reference frame when the value is displayed.

  • precision -- The maximum number of digits to print after the decimal point. Set precision=None to print as many digits as necessary to uniquely specify the value. Uses IEEE unbiased rounding.

  • broadcast -- the data is sent in real-time to the master, which dispatches it.

  • persist -- the master should store the data on-disk. Implies broadcast.

  • archive -- the data is saved into the local storage of the current run (archived as a HDF5 file).

set_default_scheduling(priority=None, pipeline_name=None, flush=None)

Sets the default scheduling options.

This function should only be called from build.

setattr_argument(key, processor=None, group=None, tooltip=None)

Sets an argument as attribute. The names of the argument and of the attribute are the same.

The key is added to the instance's kernel invariants.

setattr_dataset(key, default=<class 'artiq.language.environment.NoDefault'>, archive=True)

Sets the contents of a dataset as attribute. The names of the dataset and of the attribute are the same.

setattr_device(key)

Sets a device driver as attribute. The names of the device driver and of the attribute are the same.

The key is added to the instance's kernel invariants.

step(point)

Kernel entry point, for the main experiment sequence code. This method must be overloaded by the user. Details can be found in the Phases and Chunking documentation.