Env#

network_gym_client.Env#

class network_gym_client.Env(id, config_json)[source]#

Custom NetworkGym Environment that follows the Gymnasium interface.

Initialize Env.

Parameters:
  • id (int) – the client ID number

  • config_json (json) – the environment configuration loaded from a JSON file; a construction sketch is shown below.
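
For illustration, a minimal construction sketch is given below. The configuration file path is an assumption; use the config file that ships with your NetworkGym deployment.

>>> import json
>>> from network_gym_client import Env
>>> with open("config.json") as f:  # path is illustrative
...     config_json = json.load(f)
>>> env = Env(0, config_json)  # client ID 0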

Methods#

network_gym_client.Env.reset(self, seed=None, options=None)#

Resets the environment to an initial internal state, returning an initial observation and info.

Parameters:
  • seed (optional int) – The seed that is used to initialize the environment’s PRNG (np_random). If the environment does not already have a PRNG and seed=None (the default) is passed, a seed will be chosen from some source of entropy (e.g. timestamp or /dev/urandom). However, if the environment already has a PRNG and seed=None is passed, the PRNG will not be reset. If you pass an integer, the PRNG will be reset even if it already exists. Usually, you want to pass an integer right after the environment has been initialized and then never again; see the usage sketch after the Returns list below.

  • options (optional dict) – Additional information to specify how the environment is reset (optional, depending on the specific environment)

Returns:
  • observation (ObsType) – Observation of the initial state.

  • info (dictionary) – This dictionary contains auxiliary information complementing observation. It should be analogous to the info returned by step().
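
A minimal reset sketch following the Gymnasium convention (the seed value is arbitrary):

>>> obs, info = env.reset(seed=42)  # seed only the first reset of a run
>>> obs, info = env.reset()         # later resets reuse the existing PRNG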

network_gym_client.Env.step(self, action)#

Run one timestep of the environment’s dynamics using the agent actions.

The step function:
  • receives the action list from the RL agent and sends it to the NetworkGym server;
  • collects measurements from the network simulator and computes the observation and reward;
  • checks whether this is the last step in the episode;
  • returns observation, reward, terminated, truncated, and info.

A usage sketch follows the Returns list below.

Parameters:

action (ActType) – an action provided by the agent to update the environment state.

Returns:
  • observation (ObsType) – An element of the environment’s observation_space as the next observation due to the agent actions.

  • reward (SupportsFloat) – The reward as a result of taking the action.

  • terminated (bool) – Whether the agent reaches the terminal state (as defined under the MDP of the task), which can be positive or negative. An example is reaching the goal state or moving into the lava in Sutton and Barto’s Gridworld. If true, the user needs to call reset().

  • truncated (bool) – Whether the truncation condition outside the scope of the MDP is satisfied. Typically, this is a timelimit, but could also be used to indicate an agent physically going out of bounds. Can be used to end the episode prematurely before a terminal state is reached. If true, the user needs to call reset().

  • info (dict) – Contains auxiliary diagnostic information (helpful for debugging, learning, and logging). For this environment, info includes the one-way delay, the raw observation, and the termination flag.

  • done (bool) – (Deprecated) A boolean value indicating whether the episode has ended, in which case further step() calls will return undefined results. This was removed in OpenAI Gym v26 in favor of the terminated and truncated attributes. A done signal may be emitted for different reasons: maybe the task underlying the environment was solved successfully, a certain time limit was exceeded, or the physics simulation has entered an invalid state.
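
A hedged interaction-loop sketch using the five-tuple step API; random actions stand in for an agent policy, and the step budget is arbitrary:

>>> obs, info = env.reset(seed=42)
>>> for _ in range(100):  # illustrative step budget
...     action = env.action_space.sample()  # replace with the agent's policy
...     obs, reward, terminated, truncated, info = env.step(action)
...     if terminated or truncated:  # episode ended; start a new one
...         obs, info = env.reset()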

Attributes#

Env.action_space: spaces.Space[ActType]#

The Space object corresponding to valid actions; all valid actions should be contained within the space. For example, if the action space is of type Discrete and gives the value Discrete(2), this means there are two valid discrete actions: 0 & 1.

>>> env.action_space
Discrete(2)
>>> env.observation_space
Box(-3.4028234663852886e+38, 3.4028234663852886e+38, (4,), float32)
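
The standard Gymnasium Space helpers can be used to draw and validate actions:

>>> action = env.action_space.sample()  # draw a random valid action
>>> env.action_space.contains(action)
True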
Env.observation_space: spaces.Space[ObsType]#

The Space object corresponding to valid observations; all valid observations should be contained within the space. For example, if the observation space is of type Box and the shape of the object is (4,), this denotes that a valid observation will be an array of 4 numbers. We can also check the box bounds with attributes.

>>> env.observation_space.high
array([4.8000002e+00, 3.4028235e+38, 4.1887903e-01, 3.4028235e+38], dtype=float32)
>>> env.observation_space.low
array([-4.8000002e+00, -3.4028235e+38, -4.1887903e-01, -3.4028235e+38], dtype=float32)
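
Likewise, membership of an observation can be checked explicitly (assuming obs was returned by a prior reset() or step()):

>>> env.observation_space.contains(obs)
True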
Env.northbound_interface_client#

The Northbound Interface Client object connects and communicates with the server.

Env.adapter#

The Environment Adapter object translates the data format between Gymnasium and NetworkGym.