Advanced Olfaction¶
Arenas¶
- class flygym.examples.vision.arena.ObstacleOdorArena(terrain: BaseArena, obstacle_positions: ndarray = np.array([(7.5, 0), (12.5, 5), (17.5, -5)]), obstacle_colors: ndarray | tuple = (0, 0, 0, 1), obstacle_radius: float = 1, obstacle_height: float = 4, odor_source: ndarray = np.array([[25, 0, 2]]), peak_odor_intensity: ndarray = np.array([[1]]), diffuse_func: Callable = lambda x: ..., marker_colors: list[tuple[float, float, float, float]] | None = None, marker_size: float = 0.1, user_camera_settings: tuple[tuple[float, float, float], tuple[float, float, float], float] | None = None)
Bases:
BaseArena
- friction = (100.0, 0.005, 0.0001)
- get_olfaction(antennae_pos: ndarray) ndarray
Get the odor intensity readings from the environment.
- Parameters:
- antennae_pos: np.ndarray
The Cartesian coordinates of the antennae of the fly as a (n, 3) NumPy array where n is the number of sensors (usually n=4: 2 antennae + 2 maxillary palps), and the second dimension gives the coordinates in (x, y, z).
- Returns:
- np.ndarray
The odor intensity readings from the environment as a (k, n) NumPy array where k is the dimension of the odor signal and n is the number of odor sensors (usually n=4: 2 antennae + 2 maxillary palps).
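A minimal usage sketch (hedged: FlatTerrain is assumed to be importable from flygym.arena, and the sensor coordinates are illustrative):
```python
import numpy as np
from flygym.arena import FlatTerrain  # assumed to be available in your flygym version
from flygym.examples.vision.arena import ObstacleOdorArena

# Arena with the three default obstacles and one odor source 25 mm downrange.
arena = ObstacleOdorArena(
    terrain=FlatTerrain(),
    odor_source=np.array([[25, 0, 2]]),
    peak_odor_intensity=np.array([[1]]),
)

# Query odor intensities at four illustrative sensor positions
# (2 antennae + 2 maxillary palps), given as an (n, 3) array of (x, y, z) in mm.
sensor_pos = np.array(
    [[0.5, 0.2, 1.0], [0.5, -0.2, 1.0], [0.3, 0.1, 0.8], [0.3, -0.1, 0.8]]
)
intensities = arena.get_olfaction(sensor_pos)  # shape (k, n); here (1, 4)
```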
- get_spawn_position(rel_pos: ndarray, rel_angle: ndarray) tuple[ndarray, ndarray]
Given a relative entity spawn position and orientation (as if it was a simple flat terrain), return the adjusted position and orientation. This is useful for environments that have complex terrain (e.g. with obstacles) where the entity’s spawn position needs to be shifted accordingly.
For example, if the arena has flat terrain, this method can simply return rel_pos, rel_angle unchanged (as is the case by default). If there are features on the ground that are 0.1 mm in height, then this method should return rel_pos + [0, 0, 0.1], rel_angle.
- Parameters:
- rel_pos: np.ndarray
(x, y, z) position of the entity in mm as supplied by the user (before any transformation).
- rel_angle: np.ndarray
Euler angle (rotation along x, y, z in radian) of the fly's orientation as supplied by the user (before any transformation).
- Returns:
- np.ndarray
Adjusted (x, y, z) position of the entity.
- np.ndarray
Adjusted Euler angles (rotations along x, y, z in radian) of the fly's orientation.
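As an illustration, a subclass with hypothetical 0.1 mm tall floor features might override this method as sketched below (the class name and choice of base class are illustrative):
```python
import numpy as np
from flygym.arena import FlatTerrain  # assumed import; any concrete arena works


class RaisedFeatureArena(FlatTerrain):
    """Hypothetical arena whose floor features are 0.1 mm tall."""

    def get_spawn_position(self, rel_pos, rel_angle):
        # Lift the spawn position by the feature height; orientation is unchanged.
        return rel_pos + np.array([0.0, 0.0, 0.1]), rel_angle
```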
- init_lights()
- num_sensors = 4
- property odor_dimensions: int
The dimension of the odor signal. This can be used to emulate multiple monomolecular chemical concentrations or multiple composite odor intensities.
- Returns:
- int
The dimension of the odor space.
- post_visual_render_hook(physics)
Make necessary changes (e.g. make certain visualization markers opaque) after rendering the visual inputs. By default, this does nothing.
- pre_visual_render_hook(physics)
Make necessary changes (e.g. make certain visualization markers transparent) before rendering the visual inputs. By default, this does nothing.
- spawn_entity(entity: Any, rel_pos: ndarray, rel_angle: ndarray) None
Add the fly to the arena.
- Parameters:
- entity: mjcf.RootElement
The entity to be added to the arena (this should be the fly).
- rel_pos: np.ndarray
(x, y, z) position of the entity.
- rel_angle: np.ndarray
Euler angle representation (rotation about x, y, z) of the entity's orientation if it were spawned on simple flat terrain.
- step(dt: float, physics: dm_control.mjcf.Physics, *args, **kwargs) None
Advance the arena by one step. This is useful for interactive environments (e.g. with moving objects). Typically, this method is called from the core simulation class (e.g. NeuroMechFly).
- Parameters:
- dt: float
The time step in seconds since the last update. Typically, this is the same as the time step of the physics simulation (provided that this method is called by the core simulation every time the simulation steps).
- physics: mjcf.Physics
The physics object of the simulation. This is typically provided by the core simulation class (e.g. NeuroMechFly.physics) when the core simulation calls this method.
- *args
User-defined arguments.
- **kwargs
User-defined keyword arguments.
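A sketch of how an interactive arena could use this hook; the subclass, its attributes, and the drifting-source idea are all illustrative, not part of the library:
```python
import numpy as np
from flygym.arena import FlatTerrain  # assumed import


class DriftingSourceArena(FlatTerrain):
    """Hypothetical arena whose (internally tracked) odor source drifts over time."""

    def __init__(self, *args, drift_speed=1.0, **kwargs):
        super().__init__(*args, **kwargs)
        self.drift_speed = drift_speed  # mm/s along +x (illustrative)
        self.source_pos = np.array([25.0, 0.0, 2.0])

    def step(self, dt, physics, *args, **kwargs):
        # Called once per physics step by the core simulation; advance the source.
        self.source_pos[0] += self.drift_speed * dt
```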
- class flygym.examples.olfaction.OdorPlumeArena(plume_data_path: Path, dimension_scale_factor: float = 0.5, plume_simulation_fps: float = 200, intensity_scale_factor: float = 1.0, friction: tuple[float, float, float] = (1, 0.005, 0.0001), num_sensors: int = 4)¶
Bases:
BaseArena
This arena provides an interface to a separately simulated odor plume stored in an HDF5 file. It implements the logic that calculates the odor intensity at the fly's location at the correct time.
- Parameters:
- plume_data_path: Path
Path to the HDF5 file containing the plume simulation data.
- dimension_scale_factor: float, optional
Scaling factor for the plume simulation grid. Each cell in the plume grid is this many millimeters in the simulation. By default 0.5.
- plume_simulation_fps: float, optional
Frame rate of the plume simulation. Each frame in the plume dataset is 1 / plume_simulation_fps seconds in the physics simulation. By default 200.
- intensity_scale_factor: float, optional
Scaling factor for the intensity of the odor. By default 1.0.
- friction: tuple[float, float, float], optional
Friction parameters for the floor geom. By default (1, 0.005, 0.0001).
- num_sensors: int, optional
Number of olfactory sensors on the fly. By default 4.
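A hedged construction example; the HDF5 path below is a placeholder for your own plume dataset:
```python
from pathlib import Path
from flygym.examples.olfaction import OdorPlumeArena

plume_path = Path("outputs/plume_dataset/plume.hdf5")  # placeholder path

arena = OdorPlumeArena(
    plume_data_path=plume_path,
    dimension_scale_factor=0.5,  # each plume grid cell spans 0.5 mm
    plume_simulation_fps=200,    # one plume frame per 5 ms of physics time
    intensity_scale_factor=1.0,
)
```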
- friction = (100.0, 0.005, 0.0001)¶
- get_olfaction(antennae_pos: ndarray) ndarray ¶
Returns the olfactory input for the given antennae positions. If the fly is outside the plume simulation grid, returns np.nan.
- get_position_mapping(sim: Simulation, camera_id: str = 'birdeye_cam') ndarray ¶
Get the display location (row-col pixel coordinates) of each cell of the fluid dynamics simulation grid.
- Parameters:
- sim: Simulation
The Simulation object.
- camera_id: str, optional
Camera to build the position mapping for, by default "birdeye_cam".
- Returns:
- pos_display: np.ndarray
Array of shape (n_row_pxls_plume, n_col_pxls_plume, 2) containing the row-col coordinates of each plume simulation cell on the display image (in pixels).
- pos_physical: np.ndarray
Array of shape (n_row_pxls_plume, n_col_pxls_plume, 2) containing the row-col coordinates of each plume simulation cell on the physical simulated grid (in mm). This is a regular lattice grid marking the physical position of the centers of the fluid simulation cells.
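A usage sketch, assuming `arena` is an OdorPlumeArena and `sim` is a running Simulation that owns a camera named "birdeye_cam"; the unpacked coordinate names are illustrative:
```python
# Map every plume-grid cell to display pixels and to physical coordinates.
pos_display, pos_physical = arena.get_position_mapping(sim, camera_id="birdeye_cam")

# Display-pixel (row, col) of the plume cell at grid index (10, 20) ...
row_px, col_px = pos_display[10, 20]
# ... and the physical position of that cell's center in mm (labels illustrative).
x_mm, y_mm = pos_physical[10, 20]
```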
- get_spawn_position(rel_pos: ndarray, rel_angle: ndarray) tuple[ndarray, ndarray] ¶
Given a relative entity spawn position and orientation (as if it was a simple flat terrain), return the adjusted position and orientation. This is useful for environments that have complex terrain (e.g. with obstacles) where the entity’s spawn position needs to be shifted accordingly.
For example, if the arena has flat terrain, this method can simply return rel_pos, rel_angle unchanged (as is the case by default). If there are features on the ground that are 0.1 mm in height, then this method should return rel_pos + [0, 0, 0.1], rel_angle.
- Parameters:
- rel_pos: np.ndarray
(x, y, z) position of the entity in mm as supplied by the user (before any transformation).
- rel_angle: np.ndarray
Euler angle (rotation along x, y, z in radian) of the fly's orientation as supplied by the user (before any transformation).
- Returns:
- np.ndarray
Adjusted (x, y, z) position of the entity.
- np.ndarray
Adjusted Euler angles (rotations along x, y, z in radian) of the fly's orientation.
- init_lights()¶
- property odor_dimensions: int¶
The dimension of the odor signal. This can be used to emulate multiple monomolecular chemical concentrations or multiple composite odor intensities.
- Returns:
- int
The dimension of the odor space.
- post_visual_render_hook(physics: dm_control.mjcf.Physics, *args, **kwargs) None ¶
Make necessary changes (e.g. make certain visualization markers opaque) after rendering the visual inputs. By default, this does nothing.
- pre_visual_render_hook(physics: dm_control.mjcf.Physics, *args, **kwargs) None ¶
Make necessary changes (e.g. make certain visualization markers transparent) before rendering the visual inputs. By default, this does nothing.
- spawn_entity(entity: Any, rel_pos: ndarray, rel_angle: ndarray) None ¶
Add the fly to the arena.
- Parameters:
- entity: mjcf.RootElement
The entity to be added to the arena (this should be the fly).
- rel_pos: np.ndarray
(x, y, z) position of the entity.
- rel_angle: np.ndarray
Euler angle representation (rotation about x, y, z) of the entity's orientation if it were spawned on simple flat terrain.
- step(dt: float, physics: dm_control.mjcf.Physics = None, *args, **kwargs) None ¶
Advance the arena by one step. This is useful for interactive environments (e.g. with moving objects). Typically, this method is called from the core simulation class (e.g. NeuroMechFly).
- Parameters:
- dt: float
The time step in seconds since the last update. Typically, this is the same as the time step of the physics simulation (provided that this method is called by the core simulation every time the simulation steps).
- physics: mjcf.Physics
The physics object of the simulation. This is typically provided by the core simulation class (e.g. NeuroMechFly.physics) when the core simulation calls this method.
- *args
User-defined arguments.
- **kwargs
User-defined keyword arguments.
Tracking complex plumes¶
- class flygym.examples.olfaction.PlumeNavigationTask¶
Bases:
HybridTurningController
A wrapper around the HybridTurningController that implements logic and utilities related to plume tracking, such as overlaying the plume on the rendered images. It also checks whether the fly is within the plume simulation grid and truncates the simulation accordingly.
- Parameters:
- fly: Fly
The fly object to be used. See flygym.examples.locomotion.HybridTurningController.
- arena: OdorPlumeArena
The odor plume arena object to be used. Initialize it before creating the PlumeNavigationTask object.
- render_plume_alpha: float
The transparency of the plume overlay on the rendered images.
- intensity_display_vmax: float
The maximum intensity value to be displayed on the rendered images.
Notes
Please refer to the "MDP Task Specifications" page of the API references for the detailed specifications of the action space, the observation space, the reward, the "terminated" and "truncated" flags, and the "info" dictionary.
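A hedged construction sketch; the import path and the Fly configuration are assumptions based on the flygym examples layout:
```python
from flygym import Fly  # assumed import
from flygym.examples.olfaction import PlumeNavigationTask  # assumed import path

fly = Fly(enable_olfaction=True)  # illustrative fly configuration
sim = PlumeNavigationTask(
    fly=fly,
    arena=arena,                 # an OdorPlumeArena initialized beforehand
    render_plume_alpha=0.5,      # transparency of the plume overlay
    intensity_display_vmax=1.0,  # color-scale ceiling for the displayed intensity
)
```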
Close the simulation, save data, and release any resources.
Gets the attribute name from the environment.
Checks if the attribute name exists in the environment.
Returns the environment's internal _np_random that, if not set, will be initialised with a random seed.
- Returns:
Instances of np.random.Generator
Returns the environment's internal _np_random_seed that, if not set, will first be initialised with a random int as seed.
If np_random_seed was set directly instead of through reset() or set_np_random_through_seed(), the seed will take the value -1.
- Returns:
int: the seed of the current np_random, or -1 if the seed of the rng is unknown
Compute the render frames as specified by render_mode during the initialization of the environment.
The environment's metadata render modes (env.metadata["render_modes"]) should contain the possible ways to implement the render modes. In addition, list versions of most render modes are achieved through gymnasium.make, which automatically applies a wrapper to collect rendered frames.
- Note:
As the render_mode is known during __init__, the objects used to render the environment state should be initialised in __init__.
By convention, if the render_mode is:
- None (default): no render is computed.
- "human": The environment is continuously rendered in the current display or terminal, usually for human consumption. This rendering should occur during step(), and render() doesn't need to be called. Returns None.
- "rgb_array": Return a single frame representing the current state of the environment. A frame is a np.ndarray with shape (x, y, 3) representing RGB values for an x-by-y pixel image.
- "ansi": Return a string (str) or StringIO.StringIO containing a terminal-style text representation for each time step. The text can include newlines and ANSI escape sequences (e.g. for colors).
- "rgb_array_list" and "ansi_list": List-based versions of the render modes are possible (except "human") through the gymnasium.wrappers.RenderCollection wrapper, which is automatically applied during gymnasium.make(..., render_mode="rgb_array_list"). The frames collected are popped after render() or reset() is called.
- Note:
Make sure that your class's metadata "render_modes" key includes the list of supported modes.
Changed in version 0.25.0: The render function was changed to no longer accept parameters; rather, these parameters should be specified when the environment is initialised, i.e., gymnasium.make("CartPole-v1", render_mode="human").
Reset the simulation.
- Parameters:
- seed: int, optional
Seed for the random number generator. If None, the simulation is re-seeded without a specific seed. For reproducibility, always specify a seed.
- init_phases: np.ndarray, optional
Initial phases of the CPGs. See CPGNetwork for details.
- init_magnitudes: np.ndarray, optional
Initial magnitudes of the CPGs. See CPGNetwork for details.
- **kwargs
Additional keyword arguments to be passed to SingleFlySimulation.reset.
- Returns:
- np.ndarray
Initial observation upon reset.
- dict
Additional information.
Set the slope of the simulation environment and modify the camera orientation so that gravity is always pointing down. Changing the gravity vector might be useful during climbing simulations. The change in the camera angle has been extensively tested for the simple cameras (left, right, top, bottom, front, back) but not for the composed ones.
- Parameters:
- slope: float
The desired slope of the environment in degrees.
- rot_axis: str, optional
The axis about which the slope is applied, by default "y".
Sets the attribute name on the environment with value.
Step the simulation forward one timestep.
- Parameters:
- action: np.ndarray
Array of shape (2,) containing descending signal encoding turning.
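A minimal reset/step loop, assuming `sim` is the PlumeNavigationTask from the sketch above and that it returns the standard Gymnasium 5-tuple from step(); the drive values are illustrative:
```python
import numpy as np

obs, info = sim.reset(seed=0)
for _ in range(1000):
    action = np.array([1.0, 1.0])  # equal left/right drives: walk roughly straight
    obs, reward, terminated, truncated, info = sim.step(action)
    if terminated or truncated:
        break
```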
Returns the base non-wrapped environment.
- Returns:
Env: The base non-wrapped gymnasium.Env instance.
- class flygym.examples.olfaction.WalkingState(value, names=_not_given, *values, module=None, qualname=None, type=None, start=1, boundary=None)¶
Bases:
Enum
- FORWARD = (0, 'forward')¶
- STOP = (3, 'stop')¶
- TURN_LEFT = (1, 'left turn')¶
- TURN_RIGHT = (2, 'right turn')¶
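An illustrative mapping from walking states to descending-drive pairs (the drive values are placeholders, not the controller's actual parameters):
```python
import numpy as np
from flygym.examples.olfaction import WalkingState

DN_DRIVES = {  # placeholder drive values
    WalkingState.FORWARD: (1.0, 1.0),
    WalkingState.TURN_LEFT: (0.4, 1.2),
    WalkingState.TURN_RIGHT: (1.2, 0.4),
    WalkingState.STOP: (0.0, 0.0),
}

state = WalkingState.TURN_LEFT
action = np.array(DN_DRIVES[state])  # could be fed to sim.step(action)
```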
Bases:
object
This class implements the plume navigation controller described in Demir et al., 2020. The controller decides the fly's walking state based on its encounters with the plume. The controller has three states: forward walking, turning, and stopping. Transitions among these states are governed by Poisson processes with encounter-dependent rates.
- Parameters:
- dt: float
Time step of the physics simulation in seconds.
- forward_dn_drive: tuple[float, float]
Drive values for forward walking.
- left_turn_dn_drive: tuple[float, float]
Drive values for left turn.
- right_turn_dn_drive: tuple[float, float]
Drive values for right turn.
- stop_dn_drive: tuple[float, float]
Drive values for stopping.
- turn_duration: float
Duration of the turn in seconds.
- lambda_ws_0: float
Baseline rate of transition from walking to stopping.
- delta_lambda_ws: float
Change in the rate of transition from walking to stopping after an encounter.
- tau_s: float
Time constant for the transition from walking to stopping.
- alpha: float
Parameter for the sigmoid function that determines the turning direction.
- tau_freq_conv: float
Time constant for the exponential kernel that convolves the encounter history to determine the turning direction.
- cumulative_evidence_window: float
Window size for the cumulative evidence of the encounter history. In other words, encounters more than this many seconds ago are ignored.
- lambda_sw_0: float
Baseline rate of transition from stopping to walking.
- delta_lambda_sw: float
Maximum change in the rate of transition from stopping to walking after an encounter.
- tau_w: float
Time constant for evidence accumulation for the transition from stopping to walking.
- lambda_turn: float
Poisson rate for turning.
- random_seed: int
Random seed.
Notes
See Demir et al., 2020 for details.
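A schematic sketch (not the library's implementation) of how an encounter-dependent Poisson rate can be converted into a per-step transition decision; the rate expression below is only meant to illustrate the roles of lambda_ws_0, delta_lambda_ws, and tau_s:
```python
import numpy as np

rng = np.random.default_rng(0)


def walk_to_stop_transition(dt, t_since_encounter, lambda_ws_0, delta_lambda_ws, tau_s):
    """Return True if the fly switches from walking to stopping in this time step.

    Illustrative only: the encounter-dependent modulation decays back to the
    baseline rate with time constant tau_s after each encounter.
    """
    rate = lambda_ws_0 + delta_lambda_ws * np.exp(-t_since_encounter / tau_s)
    p_switch = 1 - np.exp(-max(rate, 0.0) * dt)  # P(>=1 Poisson event within dt)
    return rng.random() < p_switch
```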
Decide the fly’s walking state based on the encounter information. If the next state is turning, the turning direction is further determined based on the encounter frequency and the fly’s current heading (upwind or downwind).
In case the exponential kernel is truncated to a finite length, this method computes a scalar k(w) that corrects the underestimation of the integrated value:
\[k(w) = \frac{\int_{-\infty}^0 e^{t / \tau} dt}{\int_{-w}^0 e^{t / \tau} dt} = \frac{1}{1 - e^{-w/\tau}}\]
- Parameters:
- window: float
Window size for cumulative evidence in seconds.
- tau: float
Time scale for the exponential kernel.
- Returns:
- float
The correction factor.
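A quick numerical check of this correction factor, with illustrative values for w and tau:
```python
import numpy as np


def kernel_correction(window, tau):
    """k(w) = 1 / (1 - exp(-w / tau))."""
    return 1.0 / (1.0 - np.exp(-window / tau))


window, tau = 2.0, 0.5  # illustrative values, in seconds
dt = 1e-4
t_full = np.arange(-50 * tau, 0, dt)   # effectively (-inf, 0]
t_trunc = np.arange(-window, 0, dt)
full = np.sum(np.exp(t_full / tau)) * dt    # ~ tau
trunc = np.sum(np.exp(t_trunc / tau)) * dt  # ~ tau * (1 - exp(-window / tau))
assert np.isclose(full / trunc, kernel_correction(window, tau), rtol=1e-3)
```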