sinergym.envs.eplus_env.EplusEnv

class sinergym.envs.eplus_env.EplusEnv(idf_file: str, weather_file: str, variables_file: str, spaces_file: str, env_name: str = 'eplus-env-v1', discrete_actions: bool = True, weather_variability: typing.Optional[typing.Tuple[float]] = None, reward: typing.Any = <sinergym.utils.rewards.LinearReward object>, config_params: typing.Optional[typing.Dict[str, typing.Any]] = None)

Environment with EnergyPlus simulator.

__init__(idf_file: str, weather_file: str, variables_file: str, spaces_file: str, env_name: str = 'eplus-env-v1', discrete_actions: bool = True, weather_variability: typing.Optional[typing.Tuple[float]] = None, reward: typing.Any = <sinergym.utils.rewards.LinearReward object>, config_params: typing.Optional[typing.Dict[str, typing.Any]] = None)

Environment with EnergyPlus simulator.

Parameters
  • idf_file (str) – Name of the IDF file with the building definition.

  • weather_file (str) – Name of the EPW file for weather conditions.

  • variables_file (str) – Name of the file defining the variables used as observation and action in the environment (see sinergym/data/variables/ for examples).

  • spaces_file (str) – Name of the XML file defining the action and observation spaces (see sinergym/data/variables/ for examples).

  • env_name (str, optional) – Environment name used to generate the working directory. Defaults to ‘eplus-env-v1’.

  • discrete_actions (bool, optional) – Whether the actions are discrete (True) or continuous (False). Defaults to True.

  • weather_variability (Optional[Tuple[float]], optional) – Tuple with the sigma, mu and tau parameters of the Ornstein-Uhlenbeck process applied to the weather data. Defaults to None.

  • reward (Any, optional) – Reward function instance used for agent feedback. Defaults to LinearReward().

  • config_params (Optional[Dict[str, Any]], optional) – Dictionary with all extra configuration for simulator. Defaults to None.
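A minimal construction sketch follows. The file names are illustrative placeholders (substitute real files from sinergym/data/), and the weather_variability tuple follows the (sigma, mu, tau) order documented above:

    from sinergym.envs.eplus_env import EplusEnv
    from sinergym.utils.rewards import LinearReward

    # File names below are hypothetical; use files shipped in sinergym/data/.
    env = EplusEnv(
        idf_file='building.idf',
        weather_file='weather.epw',
        variables_file='variables.cfg',
        spaces_file='spaces.xml',
        env_name='eplus-env-demo-v1',
        discrete_actions=True,
        weather_variability=(1.0, 0.0, 0.001),  # (sigma, mu, tau) of the OU process
        reward=LinearReward(),
    )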

Methods

__init__(idf_file, weather_file, ...[, ...])

Environment with EnergyPlus simulator.

close()

End simulation.

render([mode])

Environment rendering.

reset()

Reset the environment.

seed([seed])

Sets the seed for this env's random number generator(s).

step(action)

Send an action to the environment.

Attributes

action_space

metadata

observation_space

reward_range

spec

unwrapped

Completely unwrap this env.

close() → None

End simulation.

metadata = {'render.modes': ['human']}
render(mode: str = 'human') → None

Environment rendering.

Parameters

mode (str, optional) – Mode for rendering. Defaults to ‘human’.

reset() → numpy.ndarray

Reset the environment.

Returns

Observation at the start of the new episode.

Return type

np.ndarray

step(action: Union[int, float, numpy.integer, numpy.ndarray, List[Any], Tuple[Any]]) → Tuple[numpy.ndarray, float, bool, Dict[str, Any]]

Send the action selected by the agent to the environment.

Parameters

action (Union[int, float, np.integer, np.ndarray, List[Any], Tuple[Any]]) – Action selected by the agent.

Returns

Observation for the next timestep, the reward obtained, whether the episode has ended, and a dictionary with extra information.

Return type

Tuple[np.ndarray, float, bool, Dict[str, Any]]
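A short usage sketch tying reset(), step() and close() together, assuming env was built as in the constructor example above and sampling random actions purely for illustration:

    obs = env.reset()                        # initial observation (np.ndarray)
    done = False
    total_reward = 0.0
    while not done:
        action = env.action_space.sample()   # random action, for illustration only
        obs, reward, done, info = env.step(action)
        total_reward += reward
    env.close()                              # end the EnergyPlus simulation
    print(f'Episode return: {total_reward}')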