HEROS in ATOMIQ¶
Atomiq is perfectly suited to describing and controlling the realtime hardware in the setup. However, a good deal of the hardware orchestration concerns non-realtime devices that are not directly connected to the realtime control system but rather have to be controlled through a wide variety of different interfaces. To make all these devices available in atomiq experiments, the HEROS framework is used. The idea behind HEROS is that every hardware object becomes a software object that can be transparently accessed via the network.
To seamlessly interact with non-realtime hardware and to make atomiq a part of the ecosystem, HEROS is fully integrated into atomiq and atomiq itself is integrated into the HEROS network.
Note
HEROS is a peer-to-peer network and does not require a central server. Rather, it uses UDP broadcasts to discover other participants in the network. It will thus automagically work within an unrestricted network segment. Note, however, that services running in a Docker container cannot send or receive UDP broadcasts from or to the host network. In such cases the container (HERO and/or atomiq) can be configured to use the host network or, alternatively, a zenoh router can be used to broker the discovery (only the discovery, not the communication!).
Using HEROs in the Atomiq experiment¶
Using an available HERO in your atomiq experiment is as easy as putting its name into your components list. Let's assume there is a dummy RF source available as a HERO with the name rfsource_dummy in the default HEROS realm heros. By prefixing the HERO name with the magic identifier "$" you can state in your components list that the experiment requires the HERO, as shown below:
from artiq.experiment import kernel
from atomiq import AtomiqExperiment

class HerosTest(AtomiqExperiment):
    components = ["herosink", "$rfsource_dummy"]

    @kernel
    def step(self, point):
        self.rfsource_dummy.set_amplitude(0.56)
Note
If your HERO is not in the default realm, you can use the syntax $realm/hero_name to reference it. If doing so in the components list, the HERO becomes available as self.realm_hero_name.
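For example, here is a minimal sketch assuming the rfsource_dummy HERO lives in a realm named lab2 (a hypothetical realm name, adjust it to your setup):

from artiq.experiment import kernel
from atomiq import AtomiqExperiment

class RealmTest(AtomiqExperiment):
    # realm-qualified HERO reference: realm "lab2", HERO "rfsource_dummy"
    components = ["$lab2/rfsource_dummy"]

    @kernel
    def step(self, point):
        # the realm prefix becomes part of the attribute name
        self.lab2_rfsource_dummy.set_amplitude(0.56)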
Any HERO imported this way inherits from Component inside ATOMIQ. However, if the HERO specifies in its metadata that it is compatible with a more specific class inside atomiq.components, then it is based on this more specific class and atomiq assumes that the necessary interface exists in the HERO. This way, it is possible to have non-realtime devices like RF sources, voltage and current sources, ADCs, simple switches, etc. look exactly like their realtime counterparts from a software perspective. Inside atomiq, you can use all methods, attributes and events that the HERO exposes. If your HERO carries proper type annotations, you can also access the HERO from within kernel code sections (of course, this then issues an RPC call to the artiq master, which in turn calls the HERO).
For more details on how to make your device a HERO and how the definition of the atomiq interfaces in HEROS work, check the documentation of HEROS and herosdevices.
If your HERO implements an ATOMIQ component interface, it can be used like any other component of that type. This also allows referencing it in the component definition in the very same way. For example, an AOM that is driven by an RF source controlled by a HERO rfsource_dummy which implements atomiq.components.electronics.rfsource.RFSource could look like the following in your components definition:
{
    "aom_cooler": {
        "classname": "atomiq.components.optoelectronics.lightmodulator.AOM",
        "arguments": {
            "rfsource": "$rfsource_dummy",
            "switch": "$rfsource_dummy",
            "center_freq": 80e6,
            "bandwidth": 40e6,
            "switching_delay": 30e-9,
            "order": 1,
            "passes": 2
        }
    }
}
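Once defined like this, aom_cooler can be requested by an experiment just like an AOM backed by realtime hardware. The following is only a sketch: the class name AOMTest and the set_frequency() call are illustrative placeholders, check the AOM class for its actual interface.

from artiq.experiment import kernel
from atomiq import AtomiqExperiment

class AOMTest(AtomiqExperiment):
    components = ["aom_cooler"]

    @kernel
    def step(self, point):
        # the HERO-backed AOM is used exactly like a realtime-driven one;
        # set_frequency() is only an illustrative method name here
        self.aom_cooler.set_frequency(80e6)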
ATOMIQ becoming a HERO¶
ATOMIQ comes with its own frontend to start the artiq master in atomiq.frontend.atomiq_master. Using this starter modifies artiq such that the artiq scheduler itself becomes a HERO. This provides a HERO for managing the execution of experiments and, in turn, allows external logic to act on your experiment-analysis-experiment cycle. Re-scheduling an experiment with different parameters, or scheduling a completely different experiment depending on the outcome of a previous experiment or the computation result of some other HERO (think AI), becomes easily possible with this feature.
Additionally, every scheduled and running experiment is a HERO as well. This allows you to interact with the living experiment remotely: you can read out important internal variables (step_counter, rid, etc.), request termination, or add custom functions. These experiment HEROs also provide an event emit_data that can be used to broadcast data out of your experiment, as described in the following section.
The scheduler HERO triggers an event run_started when a new experiment enters the run phase and an event run_ended when the run phase has finished. The metadata transmitted with this event contains all information needed to obtain the HERO of the experiment that is just starting.
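External logic can subscribe to these events like to any other HERO event. The following is a minimal sketch, assuming the scheduler HERO is announced under the name atomiq_scheduler (an assumed name; the actual name depends on your atomiq_master setup):

from heros import RemoteHERO

def on_run_started(metadata):
    # the metadata identifies the experiment that just entered the run
    # phase, so its experiment HERO can be looked up from here
    print("run started:", metadata)

# "atomiq_scheduler" is an assumed name; use the name announced by
# your atomiq_master instance
scheduler = RemoteHERO("atomiq_scheduler")
scheduler.run_started.connect(on_run_started)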
Using the Experiment HERO as Data Sink¶
With the following in your atomiq components dict:

"herosink": {
    "classname": "atomiq.components.basics.datasink.HEROSink"
}

broadcasting your data becomes as easy as:
from artiq.experiment import kernel
from atomiq import AtomiqExperiment

class HerosTest(AtomiqExperiment):
    components = ["herosink"]

    @kernel
    def step(self, point):
        # emit env data (like run ids, arguments for the current step, etc.)
        self.herosink.submit_env(point)
        # emit custom data
        self.herosink.submit_data(["my_variable", "your_variable"], [30.21, -13.0])
To access the data put into the herosink, you can get a reference to the experiment HERO and connect your analysis or processing method to the event emit_data. This could look like the following:
from heros import RemoteHERO

def process_and_print(data):
    # processing...
    print(data)

# replace the placeholder with the name of your experiment HERO
experiment = RemoteHERO("atomiq_experiment_class_name-rid")
experiment.emit_data.connect(process_and_print)