Simulation#

The user interacts with the simulated fly via the NeuroMechFly class, which implements the Gym Env API.

class flygym.mujoco.NeuroMechFly(sim_params: Parameters | None = None, actuated_joints: List = preprogrammed.all_leg_dofs, contact_sensor_placements: List = preprogrammed.all_tarsi_links, output_dir: Path | None = None, arena: BaseArena | None = None, spawn_pos: Tuple[float, float, float] = (0.0, 0.0, 0.5), spawn_orientation: Tuple[float, float, float] = (0.0, 0.0, np.pi / 2), control: str = 'position', init_pose: str | KinematicPose = 'stretch', floor_collisions: str | List[str] = 'legs', self_collisions: str | List[str] = 'legs', detect_flip: bool = False)#

Bases: Env

A NeuroMechFly environment using MuJoCo as the physics engine.
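Because the class implements the Gym Env API, it can be driven with the standard reset/step loop. The sketch below is a generic episode runner; the flygym-specific names in the trailing comment (the "joints" action key, Parameters(render_mode="saved")) are assumptions based on this page and may need adapting to your installed version:

```python
def run_episode(env, action, n_steps):
    """Drive a Gym-style environment for up to n_steps with a fixed action."""
    obs, info = env.reset()
    observations = [obs]
    for _ in range(n_steps):
        obs, reward, terminated, truncated, info = env.step(action)
        observations.append(obs)
        env.render()  # the method decides internally whether to draw a frame
        if terminated or truncated:
            break
    return observations

# With NeuroMechFly this might look like (assuming flygym is installed;
# the "joints" action key is an assumption about the action space):
#
#   import numpy as np
#   from flygym.mujoco import NeuroMechFly, Parameters
#   sim = NeuroMechFly(sim_params=Parameters(render_mode="saved"))
#   action = {"joints": np.zeros(len(sim.actuated_joints))}
#   run_episode(sim, action, 1000)
#   sim.save_video("walking.mp4")
#   sim.close()
```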

Attributes:
sim_params : flygym.mujoco.Parameters

Parameters of the MuJoCo simulation.

timestep: float

Simulation timestep in seconds.

output_dir : Path

Directory to save simulation data.

arena : flygym.arena.BaseArena

The arena in which the fly is placed.

spawn_pos : Tuple[float, float, float], optional

The (x, y, z) position in the arena defining where the fly will be spawned, in mm.

spawn_orientation : Tuple[float, float, float], optional

The spawn orientation of the fly in Euler angle format (x, y, z), where x, y, and z define the rotation about the x, y, and z axes, in radians.

control : str

The joint controller type. Can be “position”, “velocity”, or “torque”.

init_pose : flygym.state.BaseState

Which initial pose to start the simulation from.

render_mode : str

The rendering mode. Can be “saved” or “headless”.

actuated_joints : List[str]

List of names of actuated joints.

contact_sensor_placements : List[str]

List of body segments where contact sensors are placed. By default all tarsus segments.

detect_flip : bool

If True, the simulation will indicate whether the fly has flipped in the info dictionary returned by .step(...). Flip detection is achieved by checking whether the leg tips are free of any contact for a duration defined in the configuration file. Flip detection is disabled for a period at the beginning of the simulation, also defined in the configuration file; this avoids spurious detections while the fly is not yet standing reliably on the ground.
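The mechanism described above can be illustrated with a standalone sketch. This is a hypothetical re-implementation, not flygym's actual code; min_free_time and grace_period stand in for the values read from the configuration file:

```python
import numpy as np

def has_flipped(contact_forces, timestep, min_free_time=0.05, grace_period=0.1):
    """Return True if, after an initial grace period, every leg tip has been
    free of contact for at least min_free_time seconds in a row.

    contact_forces: (n_steps, n_leg_tips) array of contact force magnitudes.
    """
    n_free_needed = int(np.ceil(min_free_time / timestep))
    first_step = int(np.ceil(grace_period / timestep))
    airborne = np.all(contact_forces == 0, axis=1)  # all tips contact-free?
    run = 0
    for t in range(first_step, len(airborne)):
        run = run + 1 if airborne[t] else 0
        if run >= n_free_needed:
            return True
    return False
```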

retina : flygym.mujoco.vision.Retina

The retina simulation object used to render the fly’s visual inputs.

arena_root : dm_control.mjcf.RootElement

The root element of the arena.

physics: dm_control.mjcf.Physics

The MuJoCo Physics object built from the arena’s MJCF model with the fly in it.

curr_time : float

The (simulated) time elapsed since the last reset (in seconds).

action_space : gymnasium.core.ActType

Definition of the simulation’s action space as a Gym environment.

observation_space : gymnasium.core.ObsType

Definition of the simulation’s observation space as a Gym environment.

model : dm_control.mjcf.RootElement

The MuJoCo model.

vision_update_mask : np.ndarray

The refresh frequency of the visual system is often lower than the frequency of the physics simulation. This boolean mask indicates in which simulation time steps the visual inputs were refreshed.

action_space: spaces.Space[ActType]#
change_segment_color(segment, color)#

Change the color of a segment of the fly.

Parameters:
segment : str

The name of the segment to change the color of.

color : Tuple[float, float, float, float]

Target color as RGBA values normalized to [0, 1].

close() None#

Close the environment, save data, and release any resources.

get_info()#

Any additional information that is not part of the observation. This method always returns an empty dictionary unless extended by the user.

Returns:
Dict[str, Any]

The dictionary containing additional information.

get_observation() ObsType#

Get observation without stepping the physics simulation.

Returns:
ObsType

The observation as defined by the environment.

get_reward()#

Get the reward for the current state of the environment. This method always returns 0 unless extended by the user.

Returns:
float

The reward.

get_wrapper_attr(name: str) Any#

Gets the attribute name from the environment.

is_terminated()#

Whether the episode has terminated due to factors that are defined within the Markov Decision Process (e.g. task completion/failure). This method always returns False unless extended by the user.

Returns:
bool

Whether the simulation is terminated.

is_truncated()#

Whether the episode has terminated due to factors beyond the Markov Decision Process (e.g. time limit). This method always returns False unless extended by the user.

Returns:
bool

Whether the simulation is truncated.

metadata: dict[str, Any] = {'render_modes': []}#
property np_random: Generator#

Returns the environment’s internal _np_random generator; if it is not set, it will be initialised with a random seed.

Returns:

Instances of np.random.Generator

observation_space: spaces.Space[ObsType]#
render() ndarray | None#

Call the render method to update the renderer. It should be called every iteration; the method decides internally whether any action is required.

Returns:
np.ndarray

The rendered image, if one is rendered.

render_mode: str | None = None#
reset(*, seed: int | None = None, options: Dict | None = None) Tuple[ObsType, Dict[str, Any]]#

Reset the Gym environment.

Parameters:
seed : int

Random seed for the environment. The provided base simulation is deterministic, so this does not have an effect unless extended by the user.

options : Dict

Additional parameters for the simulation. There are none in the provided base simulation, so this has no effect unless extended by the user.

Returns:
ObsType

The observation as defined by the environment.

Dict[str, Any]

Any additional information that is not part of the observation. This is an empty dictionary by default but the user can override this method to return additional information.

reward_range = (-inf, inf)#
save_video(path: Path, stabilization_time=0.02)#

Save the video rendered since the beginning of the simulation or the last reset(), whichever is more recent. Only useful if render_mode is “saved”.

Parameters:
path : Path

Path to which the video should be saved.

stabilization_time : float, optional

Time (in seconds) to wait before starting to render the video. This might be desirable because it takes a few frames for the position controller to move the joints from the default, all-stretched position to the specified angles. By default 0.02 s.

set_slope(slope: float, rot_axis='y')#

Set the slope of the environment and modify the camera orientation so that gravity is always pointing down. Changing the gravity vector might be useful during climbing simulations. The change in the camera angle has been extensively tested for the simple cameras (left, right, top, bottom, front, back) but not for the composed ones.

Parameters:
slope : float

The desired slope of the environment in degrees.

rot_axis : str, optional

The axis about which the slope is applied, by default “y”.
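The effect of tilting the world can be reproduced with a small rotation: a slope of a given angle is equivalent to rotating the default gravity vector about the chosen axis. The sketch below is for illustration only; the g value and axis conventions are assumptions, not flygym internals:

```python
import numpy as np

def slope_gravity(slope_deg, rot_axis="y", g=9.81):
    """Gravity vector after tilting the world by slope_deg about rot_axis."""
    a = np.deg2rad(slope_deg)
    c, s = np.cos(a), np.sin(a)
    rot = {
        "x": np.array([[1, 0, 0], [0, c, -s], [0, s, c]]),
        "y": np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]]),
    }
    # rotate the default straight-down gravity vector
    return rot[rot_axis] @ np.array([0.0, 0.0, -g])

# flat ground: gravity points straight down, approximately [0, 0, -9.81]
```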

spec: EnvSpec | None = None#
step(action: ActType) Tuple[ObsType, float, bool, bool, Dict[str, Any]]#

Step the Gym environment.

Parameters:
action : ActType

Action dictionary as defined by the environment’s action space.

Returns:
ObsType

The observation as defined by the environment.

float

The reward as defined by the environment.

bool

Whether the episode has terminated due to factors that are defined within the Markov Decision Process (e.g. task completion/failure).

bool

Whether the episode has terminated due to factors beyond the Markov Decision Process (e.g. time limit).

Dict[str, Any]

Any additional information that is not part of the observation. This is an empty dictionary by default (except when vision is enabled; in this case a “vision_updated” boolean variable indicates whether the visual input to the fly was refreshed at this step), but the user can override this method to return additional information.

property unwrapped: Env[ObsType, ActType]#

Returns the base non-wrapped environment.

Returns:

Env: The base non-wrapped gymnasium.Env instance

property vision_update_mask: ndarray#

The refresh frequency of the visual system is often lower than the frequency of the physics simulation. This 1D mask, whose length equals the number of simulation time steps, indicates in which time steps the visual inputs were refreshed. In other words, the visual input frames where this mask is False are repetitions of the most recently updated visual input frame.
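As an illustration, such a mask can be constructed and used to deduplicate rendered frames. This is a sketch with assumed rates (flygym derives the actual mask internally from the simulation parameters):

```python
import numpy as np

physics_dt = 1e-4   # physics timestep in seconds (assumed)
vision_dt = 2e-3    # visual refresh interval in seconds (assumed 500 Hz)
n_steps = 100

steps_per_frame = int(round(vision_dt / physics_dt))  # 20 physics steps/frame
mask = np.arange(n_steps) % steps_per_frame == 0      # True on refresh steps

# frames captured at every physics step: keep only the genuinely new ones
all_frames = np.random.rand(n_steps, 4, 4)  # dummy stand-in for visual input
unique_frames = all_frames[mask]            # drops the repeated frames
```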