nqos_split Adapter#
- class network_gym_client.envs.nqos_split.Adapter(config_json)[source]#
nqos_split env adapter.
- Parameters:
Adapter (network_gym_client.adapter.Adapter) – base class.
Initialize the adapter.
- Parameters:
config_json (json) – the configuration file
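A minimal construction sketch, assuming the environment configuration is available as a local JSON file; the file name below is illustrative, and in typical use the adapter is created by the network_gym_client environment wrapper rather than by hand.

```python
import json

from network_gym_client.envs.nqos_split import Adapter

# Illustrative path to a configuration file; not part of the package layout.
with open("nqos_split_config.json") as f:
    config_json = json.load(f)

adapter = Adapter(config_json)
```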
Methods#
- network_gym_client.envs.nqos_split.Adapter.get_observation(self, df)#
Prepare observation for the nqos_split env.
This function should return the same number of features as defined in
get_observation_space().
- Parameters:
df (pd.DataFrame) – network stats measurement
- Returns:
spaces – observation spaces
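A call-pattern sketch for this method. The DataFrame columns and metric names below are assumptions made for illustration; the actual measurement schema is provided by the environment.

```python
import pandas as pd

# Hypothetical measurement rows; column and metric names are illustrative.
df = pd.DataFrame(
    {
        "name": ["dl::rate", "dl::owd"],
        "user": [0, 0],
        "value": [12.3, 45.0],
    }
)

obs = adapter.get_observation(df)  # features must match get_observation_space()
```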
- network_gym_client.envs.nqos_split.Adapter.get_reward(self, df)#
Prepare reward for the nqos_split env.
- Parameters:
df (pd.DataFrame) – network stats
- Returns:
spaces – reward spaces
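A sketch of customizing the reward by overriding this method in a subclass and reusing the documented default utility; the column and metric names are illustrative assumptions, not the real measurement schema.

```python
class MyAdapter(Adapter):
    def get_reward(self, df):
        # Aggregate hypothetical throughput/delay measurements (names are illustrative).
        throughput = float(df.loc[df["name"] == "dl::rate", "value"].mean())
        delay = float(df.loc[df["name"] == "dl::owd", "value"].mean())
        # Reuse the documented default alpha-balanced utility as the scalar reward.
        return self.netowrk_util(throughput, delay, alpha=0.5)
```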
- network_gym_client.envs.nqos_split.Adapter.get_policy(self, action)#
Prepare policy for the nqos_split env.
- Parameters:
action (spaces) – action from the RL agent
- Returns:
json – network policy
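A call-pattern sketch: a Gym action goes in, a JSON-serializable network policy comes out. The action shape and interpretation shown here are assumptions for illustration.

```python
import numpy as np

# Hypothetical action: per-link traffic split ratios for one user.
action = np.array([0.7, 0.3], dtype=np.float32)

policy = adapter.get_policy(action)  # JSON network policy sent to the environment
print(policy)
```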
Reward Method#
- network_gym_client.envs.nqos_split.Adapter.netowrk_util(self, throughput, delay, alpha=0.5)#
Calculates a network utility value from throughput and delay, balanced by the alpha parameter. This is the default reward function.
- Parameters:
throughput – a float representing the network throughput in bits per second
delay – a float representing the network delay in seconds
alpha – a float representing the alpha value for balancing (default is 0.5)
- Returns:
a float representing the alpha-balanced metric
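As a reference for what an alpha-balanced throughput/delay metric typically looks like, here is a standalone sketch of the usual log-utility form; it is an assumption for illustration, not a copy of the library's implementation.

```python
import numpy as np

def alpha_balanced_util(throughput, delay, alpha=0.5):
    # Reward higher throughput and penalize higher delay, weighted by alpha.
    # Assumed log-utility form; not guaranteed to match netowrk_util exactly.
    return alpha * np.log(throughput) - (1 - alpha) * np.log(delay)
```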
Additional Methods#
- network_gym_client.envs.nqos_split.Adapter.get_action_space(self)#
Get action space for the nqos_split env.
- Returns:
spaces – action spaces
- network_gym_client.envs.nqos_split.Adapter.get_observation_space(self)#
Get the observation space for the nqos_split env.
- Returns:
spaces – observation spaces
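A sketch of the kind of Gymnasium spaces these two methods return; the shapes, bounds, and user count below are placeholders, not the environment's actual definition.

```python
import numpy as np
from gymnasium import spaces

num_users = 4  # illustrative

# Example action space: one split ratio per user in [0, 1].
action_space = spaces.Box(low=0.0, high=1.0, shape=(num_users,), dtype=np.float32)

# Example observation space: a few features per user.
observation_space = spaces.Box(low=0.0, high=np.inf, shape=(3, num_users), dtype=np.float32)
```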