Advanced Olfaction

Arenas

class flygym.examples.vision.arena.ObstacleOdorArena(terrain: BaseArena, obstacle_positions: ndarray = np.array([(7.5, 0), (12.5, 5), (17.5, -5)]), obstacle_colors: ndarray | Tuple = (0, 0, 0, 1), obstacle_radius: float = 1, obstacle_height: float = 4, odor_source: ndarray = np.array([[25, 0, 2]]), peak_odor_intensity: ndarray = np.array([[1]]), diffuse_func: Callable = lambda x: ..., marker_colors: List[Tuple[float, float, float, float]] | None = None, marker_size: float = 0.1, user_camera_settings: Tuple[Tuple[float, float, float], Tuple[float, float, float], float] | None = None)

Bases: BaseArena

friction = (100.0, 0.005, 0.0001)
get_olfaction(antennae_pos: ndarray) ndarray

Get the odor intensity readings from the environment.

Parameters:
antennae_pos : np.ndarray

The Cartesian coordinates of the antennae of the fly as an (n, 3) NumPy array, where n is the number of sensors (usually n=4: 2 antennae + 2 maxillary palps) and the second dimension gives the coordinates in (x, y, z).

Returns:
np.ndarray

The odor intensity readings from the environment as a (k, n) NumPy array where k is the dimension of the odor signal and n is the number of odor sensors (usually n=4: 2 antennae + 2 maxillary palps).
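
Conceptually, each source’s peak intensity is attenuated by a distance-dependent diffusion function and summed over sources. The following sketch illustrates this under an assumed inverse-square kernel; the helper function and array shapes are illustrative, not the library’s exact implementation:

```python
import numpy as np

def get_olfaction(antennae_pos, odor_source, peak_odor_intensity, diffuse_func):
    """Compute a (k, n) intensity array for n sensors and k odor dimensions.

    antennae_pos: (n, 3) sensor coordinates
    odor_source: (m, 3) source coordinates
    peak_odor_intensity: (m, k) peak intensity per source and odor dimension
    diffuse_func: maps distance -> attenuation factor
    """
    # Distance from every source to every sensor: (m, n)
    dist = np.linalg.norm(
        odor_source[:, None, :] - antennae_pos[None, :, :], axis=2
    )
    scaling = diffuse_func(dist)  # (m, n) attenuation factors
    # Weight each source's peaks by its attenuation and sum over sources: (k, n)
    return np.einsum("mk,mn->kn", peak_odor_intensity, scaling)

# Single source at (25, 0, 2) with unit peak intensity, inverse-square falloff
odor_source = np.array([[25.0, 0.0, 2.0]])
peak = np.array([[1.0]])
sensors = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0]])
intensity = get_olfaction(sensors, odor_source, peak, lambda d: d ** -2)
print(intensity.shape)  # (1, 2): k=1 odor dimension, n=2 sensors
```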

get_spawn_position(rel_pos: ndarray, rel_angle: ndarray) Tuple[ndarray, ndarray]

Given a relative entity spawn position and orientation (as if it were simple flat terrain), return the adjusted position and orientation. This is useful for environments with complex terrain (e.g. with obstacles) where the entity’s spawn position needs to be shifted accordingly.

For example, if the arena has flat terrain, this method can simply return rel_pos, rel_angle unchanged (as is the case by default). If there are features on the ground that are 0.1 mm in height, then this method should return rel_pos + [0, 0, 0.1], rel_angle.

Parameters:
rel_pos : np.ndarray

(x, y, z) position of the entity in mm as supplied by the user (before any transformation).

rel_angle : np.ndarray

Euler angle (rotation along x, y, z in radian) of the fly’s orientation as supplied by the user (before any transformation).

Returns:
np.ndarray

Adjusted (x, y, z) position of the entity.

np.ndarray

Adjusted Euler angles (rotations along x, y, z in radian) of the fly’s orientation.
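
As an illustration of the behavior described above, a hypothetical subclass (RaisedFloorArena is not part of flygym) might shift the spawn height for a terrain with 0.1 mm floor features:

```python
import numpy as np

class RaisedFloorArena:  # illustrative sketch, not a flygym class
    """Arena whose floor features are 0.1 mm tall everywhere."""

    floor_height = 0.1  # mm

    def get_spawn_position(self, rel_pos, rel_angle):
        # Shift the spawn position up by the floor height; orientation unchanged
        return rel_pos + np.array([0.0, 0.0, self.floor_height]), rel_angle

arena = RaisedFloorArena()
pos, angle = arena.get_spawn_position(np.array([0.0, 0.0, 0.3]), np.zeros(3))
print(pos)  # [0.  0.  0.4]
```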

init_lights()
num_sensors = 4
property odor_dimensions: int

The dimension of the odor signal. This can be used to emulate multiple monomolecular chemical concentrations or multiple composite odor intensities.

Returns:
int

The dimension of the odor space.

post_visual_render_hook(physics)

Make necessary changes (e.g. make certain visualization markers opaque) after rendering the visual inputs. By default, this does nothing.

pre_visual_render_hook(physics)

Make necessary changes (e.g. make certain visualization markers transparent) before rendering the visual inputs. By default, this does nothing.

spawn_entity(entity: Any, rel_pos: ndarray, rel_angle: ndarray) None

Add the fly to the arena.

Parameters:
entity : mjcf.RootElement

The entity to be added to the arena (this should be the fly).

rel_pos : np.ndarray

(x, y, z) position of the entity.

rel_angle : np.ndarray

Euler angle representation (rotation around x, y, z, in radian) of the entity’s orientation if it were spawned on simple flat terrain.

step(dt: float, physics: dm_control.mjcf.Physics, *args, **kwargs) None

Advance the arena by one step. This is useful for interactive environments (e.g. moving object). Typically, this method is called from the core simulation class (e.g. NeuroMechFly).

Parameters:
dt : float

The time step in seconds since the last update. Typically, this is the same as the time step of the physics simulation (provided that this method is called by the core simulation every time the simulation steps).

physics : mjcf.Physics

The physics object of the simulation. This is typically provided by the core simulation class (e.g. NeuroMechFly.physics) when the core simulation calls this method.

*args

User-defined positional arguments.

**kwargs

User-defined keyword arguments.

class flygym.examples.olfaction.OdorPlumeArena(plume_data_path: Path, dimension_scale_factor: float = 0.5, plume_simulation_fps: float = 200, intensity_scale_factor: float = 1.0, friction: Tuple[float, float, float] = (1, 0.005, 0.0001), num_sensors: int = 4)

Bases: BaseArena

This Arena class provides an interface to a separately simulated odor plume. The plume simulation is stored in an HDF5 file. This class implements the logic that calculates the intensity of the odor at the fly’s location at the correct time.
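
The core lookup can be sketched as follows: map the simulation time to a plume frame index and the fly’s (x, y) position to grid indices. The grid shape, scale-factor interpretation, and array layout below are illustrative assumptions, and the actual HDF5 loading is omitted:

```python
import numpy as np

# Synthetic stand-in for plume data loaded from HDF5: (n_frames, n_rows, n_cols)
plume_grid = np.random.default_rng(0).random((100, 40, 80))
plume_simulation_fps = 200.0
dimension_scale_factor = 0.5  # assumed mm-per-cell scaling, for illustration

def plume_intensity(t, xy_positions):
    """Odor intensity at each (x, y) position at simulation time t (seconds).

    Returns np.nan for positions outside the plume grid.
    """
    frame = min(int(t * plume_simulation_fps), plume_grid.shape[0] - 1)
    out = np.full(len(xy_positions), np.nan)
    for i, (x, y) in enumerate(xy_positions):
        col = int(x / dimension_scale_factor)
        row = int(y / dimension_scale_factor)
        if 0 <= row < plume_grid.shape[1] and 0 <= col < plume_grid.shape[2]:
            out[i] = plume_grid[frame, row, col]
    return out

vals = plume_intensity(0.01, np.array([[3.0, 2.0], [999.0, 2.0]]))
print(vals)  # second position is outside the grid -> nan
```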

friction = (100.0, 0.005, 0.0001)
get_olfaction(antennae_pos: ndarray) ndarray

Returns the olfactory input for the given antennae positions. If the fly is outside the plume simulation grid, returns np.nan.

get_position_mapping(sim: Simulation, camera_id: str = 'birdeye_cam') ndarray

Get the display location (row-col coordinates, in pixels) of each cell of the fluid dynamics simulation.

Parameters:
sim : Simulation

The Simulation object.

camera_id : str, optional

Camera to build the position mapping for, by default “birdeye_cam”.

Returns:
pos_display: np.ndarray

Array of shape (n_row_pxls_plume, n_col_pxls_plume, 2) containing the row-col coordinates of each plume simulation cell on the display image (in pixels).

pos_physical: np.ndarray

Array of shape (n_row_pxls_plume, n_col_pxls_plume, 2) containing the row-col coordinates of each plume simulation cell on the physical simulated grid (in mm). This is a regular lattice grid marking the physical position of the centers of the fluid simulation cells.
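
The pos_physical lattice described above can be sketched as the grid of cell centers; the grid shape and cell size below are illustrative assumptions:

```python
import numpy as np

n_rows, n_cols = 40, 80
cell_size = 0.5  # mm per plume cell (illustrative value)

# Cell-center coordinates of an n_rows x n_cols plume grid: (n_rows, n_cols, 2)
rows = (np.arange(n_rows) + 0.5) * cell_size
cols = (np.arange(n_cols) + 0.5) * cell_size
pos_physical = np.stack(np.meshgrid(rows, cols, indexing="ij"), axis=-1)
print(pos_physical.shape)  # (40, 80, 2)
```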

get_spawn_position(rel_pos: ndarray, rel_angle: ndarray) Tuple[ndarray, ndarray]

Given a relative entity spawn position and orientation (as if it were simple flat terrain), return the adjusted position and orientation. This is useful for environments with complex terrain (e.g. with obstacles) where the entity’s spawn position needs to be shifted accordingly.

For example, if the arena has flat terrain, this method can simply return rel_pos, rel_angle unchanged (as is the case by default). If there are features on the ground that are 0.1 mm in height, then this method should return rel_pos + [0, 0, 0.1], rel_angle.

Parameters:
rel_pos : np.ndarray

(x, y, z) position of the entity in mm as supplied by the user (before any transformation).

rel_angle : np.ndarray

Euler angle (rotation along x, y, z in radian) of the fly’s orientation as supplied by the user (before any transformation).

Returns:
np.ndarray

Adjusted (x, y, z) position of the entity.

np.ndarray

Adjusted Euler angles (rotations along x, y, z in radian) of the fly’s orientation.

init_lights()
property odor_dimensions: int

The dimension of the odor signal. This can be used to emulate multiple monomolecular chemical concentrations or multiple composite odor intensities.

Returns:
int

The dimension of the odor space.

post_visual_render_hook(physics: dm_control.mjcf.Physics, *args, **kwargs) None

Make necessary changes (e.g. make certain visualization markers opaque) after rendering the visual inputs. By default, this does nothing.

pre_visual_render_hook(physics: dm_control.mjcf.Physics, *args, **kwargs) None

Make necessary changes (e.g. make certain visualization markers transparent) before rendering the visual inputs. By default, this does nothing.

spawn_entity(entity: Any, rel_pos: ndarray, rel_angle: ndarray) None

Add the fly to the arena.

Parameters:
entity : mjcf.RootElement

The entity to be added to the arena (this should be the fly).

rel_pos : np.ndarray

(x, y, z) position of the entity.

rel_angle : np.ndarray

Euler angle representation (rotation around x, y, z, in radian) of the entity’s orientation if it were spawned on simple flat terrain.

step(dt: float, physics: dm_control.mjcf.Physics = None, *args, **kwargs) None

Advance the arena by one step. This is useful for interactive environments (e.g. moving object). Typically, this method is called from the core simulation class (e.g. NeuroMechFly).

Parameters:
dt : float

The time step in seconds since the last update. Typically, this is the same as the time step of the physics simulation (provided that this method is called by the core simulation every time the simulation steps).

physics : mjcf.Physics

The physics object of the simulation. This is typically provided by the core simulation class (e.g. NeuroMechFly.physics) when the core simulation calls this method.

*args

User-defined positional arguments.

**kwargs

User-defined keyword arguments.

Tracking complex plumes

class flygym.examples.olfaction.PlumeNavigationTask(fly: Fly, arena: OdorPlumeArena, render_plume_alpha: float = 0.75, intensity_display_vmax: float = 1.0, **kwargs)

Bases: HybridTurningController

A wrapper around the HybridTurningController that implements logic and utilities related to plume tracking, such as overlaying the plume on the rendered images. It also checks whether the fly is within the plume simulation grid and truncates the simulation accordingly.

Notes

Please refer to the “MDP Task Specifications” page of the API references for the detailed specifications of the action space, the observation space, the reward, the “terminated” and “truncated” flags, and the “info” dictionary.

property action_space
close() None

Close the simulation, save data, and release any resources.

get_observation() ObsType
get_wrapper_attr(name: str) Any

Gets the attribute name from the environment.

property gravity
metadata: dict[str, Any] = {'render_modes': []}
property np_random: Generator

Returns the environment’s internal _np_random generator; if it is not set, it will be initialised with a random seed.

Returns:

An instance of np.random.Generator.

property observation_space
overlay_focused_plume(focus_img, t_idx)
render(*args, **kwargs)

Compute the render frames as specified by render_mode during the initialization of the environment.

The environment’s metadata render modes (env.metadata[“render_modes”]) should contain the possible ways to implement the render modes. In addition, list versions of most render modes can be achieved through gymnasium.make, which automatically applies a wrapper to collect rendered frames.

Note:

As the render_mode is known during __init__, the objects used to render the environment state should be initialised in __init__.

By convention, if the render_mode is:

  • None (default): no render is computed.

  • “human”: The environment is continuously rendered in the current display or terminal, usually for human consumption. This rendering should occur during step() and render() doesn’t need to be called. Returns None.

  • “rgb_array”: Return a single frame representing the current state of the environment. A frame is a np.ndarray with shape (x, y, 3) representing RGB values for an x-by-y pixel image.

  • “ansi”: Return a string (str) or StringIO.StringIO containing a terminal-style text representation for each time step. The text can include newlines and ANSI escape sequences (e.g. for colors).

  • “rgb_array_list” and “ansi_list”: List-based versions of render modes are possible (except “human”) through the gymnasium.wrappers.RenderCollection wrapper, which is automatically applied during gymnasium.make(..., render_mode="rgb_array_list"). The collected frames are popped after render() or reset() is called.

Note:

Make sure that your class’s metadata "render_modes" key includes the list of supported modes.

Changed in version 0.25.0: The render function was changed to no longer accept parameters; rather, these parameters should be specified when the environment is initialised, e.g., gymnasium.make("CartPole-v1", render_mode="human")

render_mode: str | None = None
reset(seed=None, init_phases=None, init_magnitudes=None, **kwargs)

Reset the simulation.

Parameters:
seed : int, optional

Seed for the random number generator. If None, the simulation is re-seeded without a specific seed. For reproducibility, always specify a seed.

init_phases : np.ndarray, optional

Initial phases of the CPGs. See CPGNetwork for details.

init_magnitudes : np.ndarray, optional

Initial magnitudes of the CPGs. See CPGNetwork for details.

**kwargs

Additional keyword arguments to be passed to SingleFlySimulation.reset.

Returns:
np.ndarray

Initial observation upon reset.

dict

Additional information.

reward_range = (-inf, inf)
set_slope(slope: float, rot_axis='y')

Set the slope of the simulation environment and modify the camera orientation so that gravity is always pointing down. Changing the gravity vector might be useful during climbing simulations. The change in the camera angle has been extensively tested for the simple cameras (left, right, top, bottom, front, back) but not for the composed ones.

Parameters:
slope : float

The desired slope of the environment in degrees.

rot_axis : str, optional

The axis about which the slope is applied, by default “y”.

spec: EnvSpec | None = None
step(action)

Step the simulation forward one timestep.

Parameters:
action : np.ndarray

Array of shape (2,) containing descending signal encoding turning.

property unwrapped: Env[ObsType, ActType]

Returns the base non-wrapped environment.

Returns:

Env: The base non-wrapped gymnasium.Env instance

class flygym.examples.olfaction.WalkingState(value)

Bases: Enum

FORWARD = (0, 'forward')
STOP = (3, 'stop')
TURN_LEFT = (1, 'left turn')
TURN_RIGHT = (2, 'right turn')
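
Each member’s value is an (index, label) tuple. A minimal re-creation of the enum for illustration:

```python
from enum import Enum

class WalkingState(Enum):  # re-created here for illustration
    FORWARD = (0, "forward")
    TURN_LEFT = (1, "left turn")
    TURN_RIGHT = (2, "right turn")
    STOP = (3, "stop")

state = WalkingState.TURN_LEFT
idx, label = state.value  # unpack the (index, label) tuple
print(idx, label)  # 1 left turn
```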
class flygym.examples.olfaction.PlumeNavigationController(dt: float, forward_dn_drive: Tuple[float, float] = (1.0, 1.0), left_turn_dn_drive: Tuple[float, float] = (-0.4, 1.2), right_turn_dn_drive: Tuple[float, float] = (1.2, -0.4), stop_dn_drive: Tuple[float, float] = (0.0, 0.0), turn_duration: float = 0.25, lambda_ws_0: float = 0.78, delta_lambda_ws: float = -0.8, tau_s: float = 0.2, alpha: float = 0.8, tau_freq_conv: float = 2, cumulative_evidence_window: float = 2.0, lambda_sw_0: float = 0.5, delta_lambda_sw: float = 1, tau_w=0.52, lambda_turn: float = 1.33, random_seed: int = 0)

Bases: object

This class implements the plume navigation controller described in Demir et al., 2020. The controller decides the fly’s walking state based on its encounters with the plume. The controller has three states: forward walking, turning, and stopping. The transitions among these states are governed by Poisson processes with encounter-dependent rates.
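
The Poisson-process mechanism can be sketched with a simplified two-state version: within each timestep dt, a transition with rate λ fires with probability 1 − e^(−λ·dt). The actual controller uses encounter-dependent rates and a turning state; the version below only uses the baseline walk/stop rates from the signature:

```python
import numpy as np

dt = 1e-3  # controller timestep in seconds (illustrative value)
rng = np.random.default_rng(0)

lambda_ws = 0.78  # walk -> stop baseline rate (lambda_ws_0 from the signature)
lambda_sw = 0.5   # stop -> walk baseline rate (lambda_sw_0 from the signature)

def fires(rate):
    # A Poisson process with this rate fires within dt with prob. 1 - exp(-rate * dt)
    return rng.random() < 1.0 - np.exp(-rate * dt)

state, n_transitions = "walk", 0
for _ in range(int(60 / dt)):  # simulate 60 s
    if state == "walk" and fires(lambda_ws):
        state, n_transitions = "stop", n_transitions + 1
    elif state == "stop" and fires(lambda_sw):
        state, n_transitions = "walk", n_transitions + 1

print(n_transitions)  # roughly a few dozen transitions at these rates
```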

decide_state(encounter_flag: bool, fly_heading: ndarray)

Decide the fly’s walking state based on the encounter information. If the next state is turning, the turning direction is further determined based on the encounter frequency and the fly’s current heading (upwind or downwind).

exp_integral_norm_factor(window: float, tau: float)

In case the exponential kernel is truncated to a finite length, this method computes a scaling factor k(w) that corrects for the underestimation of the integrated value:

\[k(w) = \frac{\int_{-\infty}^0 e^{t / \tau} dt} {\int_{-w}^0 e^{t / \tau} dt} = \frac{1}{1 - e^{-w/\tau}}\]
Parameters:
window : float

Window size for cumulative evidence in seconds.

tau : float

Time scale for the exponential kernel.

Returns:
float

The correction factor.
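
The closed form can be checked numerically against the truncated integral (a sketch, using the default window and tau values from the controller signature):

```python
import numpy as np

def exp_integral_norm_factor(window, tau):
    """Closed-form correction factor k(w) = 1 / (1 - exp(-w / tau))."""
    return 1.0 / (1.0 - np.exp(-window / tau))

window, tau = 2.0, 0.2  # defaults from the controller signature
k = exp_integral_norm_factor(window, tau)

# Midpoint-rule integral of exp(t / tau) over the truncated window [-w, 0]
step = 1e-5
t_mid = np.arange(-window + step / 2, 0.0, step)
truncated_integral = np.exp(t_mid / tau).sum() * step

# Scaling the truncated integral by k recovers the full integral, which equals tau
print(np.isclose(truncated_integral * k, tau))  # True
```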