sinergym.utils.wrappers.MultiObsWrapper
- class sinergym.utils.wrappers.MultiObsWrapper(env: Any, n: int = 5, flatten: bool = True)
- __init__(env: Any, n: int = 5, flatten: bool = True) None
Stack of observations.
- Parameters
env (Any) – Original Gym environment.
n (int, optional) – Number of observations to be stacked. Defaults to 5.
flatten (bool, optional) – Whether or not to flatten the stacked observation vector. Defaults to True.
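To illustrate what this wrapper does conceptually, here is a minimal, self-contained sketch of an observation-stacking wrapper. It assumes the wrapper keeps the last n observations in a FIFO buffer and fills that buffer with the initial observation on reset; `ToyEnv` and `MultiObsSketch` are hypothetical names used only for this example, not Sinergym's actual implementation.

```python
from collections import deque
from typing import Any, Dict, Tuple

import numpy as np


class ToyEnv:
    """Hypothetical stand-in environment with 3-dimensional observations."""

    def reset(self) -> np.ndarray:
        return np.zeros(3)

    def step(self, action: float) -> Tuple[np.ndarray, float, bool, Dict[str, Any]]:
        return np.ones(3) * action, 0.0, False, {}


class MultiObsSketch:
    """Sketch of an observation-stacking wrapper (illustrative only)."""

    def __init__(self, env: Any, n: int = 5, flatten: bool = True):
        self.env = env
        self.n = n
        self.flatten = flatten
        # FIFO buffer: appending beyond maxlen drops the oldest entry.
        self.history = deque([], maxlen=n)

    def _get_obs(self) -> np.ndarray:
        obs = np.array(self.history)  # shape (n, obs_dim)
        return obs.reshape(-1) if self.flatten else obs

    def reset(self) -> np.ndarray:
        obs = self.env.reset()
        # Assumption: the buffer starts as the initial observation repeated n times.
        for _ in range(self.n):
            self.history.append(obs)
        return self._get_obs()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.history.append(obs)  # oldest observation drops out automatically
        return self._get_obs(), reward, done, info
```

With `n=4` and a 3-dimensional observation, `reset()` returns a flat vector of length 12, and each `step` shifts the newest observation into the stack.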
Methods
__init__(env[, n, flatten]) - Stack of observations.
class_name()
close() - Override close in your subclass to perform any necessary cleanup.
compute_reward(achieved_goal, desired_goal, info)
render([mode]) - Renders the environment.
reset() - Resets the environment.
seed([seed]) - Sets the seed for this env's random number generator(s).
step(action) - Performs the given action in the environment.
Attributes
action_space
metadata
observation_space
reward_range
spec
unwrapped - Completely unwrap this env.
- reset() numpy.ndarray
Resets the environment.
- Returns
Stacked previous observations.
- Return type
np.ndarray
- step(action: Union[int, numpy.ndarray]) Tuple[numpy.ndarray, float, bool, Dict[str, Any]]
Performs the given action in the environment.
- Parameters
action (Union[int, np.ndarray]) – Action to be executed in environment.
- Returns
Tuple containing the next stacked observation, the reward, a boolean flag indicating whether the episode has terminated, and a dictionary with extra information.
- Return type
Tuple[np.ndarray, float, bool, Dict[str, Any]]
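The effect of the flatten parameter on the shapes returned by step() can be sketched as follows. A real Sinergym environment requires an EnergyPlus backend, so this self-contained example uses a hypothetical DummyEnv and a helper that mimics the stacking behavior under the same assumption as above (buffer pre-filled on reset, oldest entry dropped on each step).

```python
from collections import deque

import numpy as np


class DummyEnv:
    """Hypothetical stand-in env; a real Sinergym env needs EnergyPlus."""

    def reset(self):
        return np.arange(3, dtype=float)

    def step(self, action):
        return np.arange(3, dtype=float) + action, -1.0, False, {'timestep': 1}


def stacked_step(env, n=5, flatten=True):
    """Reset, take one step, and return what a stacking wrapper would yield."""
    history = deque([env.reset()] * n, maxlen=n)
    obs, reward, done, info = env.step(1.0)
    history.append(obs)  # newest observation in, oldest out
    stacked = np.array(history)  # shape (n, obs_dim)
    if flatten:
        stacked = stacked.reshape(-1)  # shape (n * obs_dim,)
    return stacked, reward, done, info


flat_obs, reward, done, info = stacked_step(DummyEnv(), n=5, flatten=True)
# flat_obs.shape == (15,): 5 stacked 3-dimensional observations, flattened

nested_obs, _, _, _ = stacked_step(DummyEnv(), n=5, flatten=False)
# nested_obs.shape == (5, 3): one row per stored observation
```

With flatten=True (the default), agents see a single flat vector of length n * obs_dim, which is what most RL algorithm implementations expect as input.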