EdgeRIC main paper
This work provides a real-time MAC scheduler that assigns scheduling decisions per UE for each TTI. We implement multiple scheduling algorithms with this μApp. The scheduling algorithms considered in this work to demonstrate system behavior are: Max Weight, Max CQI, Proportional Fairness, Round Robin, and a custom AI-based scheduler using an off-the-shelf reinforcement learning policy.
MAC scheduling
This μApp offers scheduling decisions to the RAN at the granularity of one TTI (~1 ms). It adopts a weight-based approach for its decisions: the weight of a UE corresponds to its relative priority to be scheduled given the current state of the system. We list the metrics and actions used by this μApp below (a weight-computation sketch follows the list):
- Metrics: ue_data[rnti]['Tx'], ue_data[rnti]['CQI'], ue_data[rnti]['BL'] for each UE.
- Actions Sent: weight_{i}, which corresponds to the weight of each UE i.
- RT-E2 Policy Format for this μApp: RNTI_{i}, weight_{i}, where i corresponds to each UE.
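As an illustration, the sketch below shows how per-UE weights could be derived from these metrics for a few of the listed algorithms. This is a hedged example: the ue_data layout follows the metric names above, while the function name, the normalization step, and the throughput-averaging argument are hypothetical and not part of the codebase.

```python
# Hypothetical sketch: derive per-UE scheduling weights from the metrics above.
# ue_data maps RNTI -> {'Tx': transmitted bytes, 'CQI': channel quality, 'BL': backlog}.

def compute_weights(ue_data, algorithm="Max CQI", avg_tput=None):
    weights = {}
    for rnti, m in ue_data.items():
        if algorithm == "Max CQI":
            # Prioritize the UE with the best channel quality.
            weights[rnti] = float(m['CQI'])
        elif algorithm == "Max Weight":
            # Classic max-weight rule: backlog times channel quality.
            weights[rnti] = float(m['BL']) * float(m['CQI'])
        elif algorithm == "Proportional Fairness":
            # Instantaneous rate proxy divided by a (hypothetical) running
            # average of past throughput per UE.
            avg = max((avg_tput or {}).get(rnti, 1e-6), 1e-6)
            weights[rnti] = float(m['CQI']) / avg
        else:
            weights[rnti] = 1.0  # Round Robin / RL handled elsewhere
    # Normalize so the weights sum to 1 before sending them as the RT-E2 policy.
    total = sum(weights.values()) or 1.0
    return {rnti: w / total for rnti, w in weights.items()}
```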
Training Reinforcement Learning Policy
Here, we train an RL agent with the objective of total system throughput maximization. Listed below are the specifications for our training:
- Algorithm used: Proximal Policy Optimization
- State_space: [BL1,CQI1,BL2,CQI2.....]
- Action_space: [Weight1,Weight2.....]
- Reward: Total system throughput
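For reference, the snippet below is a minimal, hedged sketch of such a training run using Stable-Baselines3 PPO on a placeholder environment whose observation and action layouts match the spaces above. The environment class, its stubbed dynamics, and the library choice are illustrative assumptions; the actual training code is in muApp2, described later.

```python
# Hypothetical sketch (not the repo's training script): PPO on an env whose
# observation is [BL1, CQI1, BL2, CQI2, ...] and whose action is one weight per UE.
import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO

class ToySchedulingEnv(gym.Env):
    """Placeholder env matching the state/action layout listed above."""
    def __init__(self, num_ues=2):
        self.num_ues = num_ues
        self.observation_space = gym.spaces.Box(0.0, np.inf, shape=(2 * num_ues,), dtype=np.float32)
        self.action_space = gym.spaces.Box(0.0, 1.0, shape=(num_ues,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        return np.zeros(2 * self.num_ues, dtype=np.float32), {}

    def step(self, action):
        obs = np.zeros(2 * self.num_ues, dtype=np.float32)
        reward = 0.0  # stand-in for the total-system-throughput reward
        return obs, reward, False, False, {}

model = PPO("MlpPolicy", ToySchedulingEnv(), n_steps=2048)  # 2048 samples per update
model.learn(total_timesteps=100_000)
```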
Install srsRAN with support for EdgeRIC messages and control
Install zmq
git clone https://github.com/zeromq/libzmq.git
cd libzmq
./autogen.sh
./configure
make
sudo make install
sudo ldconfig
git clone https://github.com/zeromq/czmq.git
cd czmq
./autogen.sh
./configure
make
sudo make install
sudo ldconfig
Dependencies and cloning the repository
sudo apt-get install cmake make gcc g++ pkg-config libfftw3-dev libmbedtls-dev libsctp-dev libyaml-cpp-dev libgtest-dev # install dependencies
git clone https://github.com/ushasigh/EdgeRIC-A-real-time-RIC.git
cd srsRAN-4G-new-ER-design
Build the repository
./make-ran.sh
All config files are provided in srsRAN_4G-ER/.config
How to run the network
Set up the core and srsenb
Terminal 1: Run the GRC broker corresponding to the number of UEs {i}
./top_block_{i}ue_23.04MHz.py
This step is not needed in over-the-air mode.
Terminal 2: Run the EPC. Update the hss section in ./config/epc.conf to point to your file directory.
./run_epc.sh
Terminal 3: Run the eNB. Update ./config/enb.conf:
the [rf] section for the ZMQ/UHD configuration
the [enb_files] section: update the file directories
./run_enb.sh
Run the UEs
Run UE1:
cd srsRAN_4G-ER/build
sudo ./srsue/src/srsue ../.config/ue1.conf --rf.device_name=zmq --rf.device_args="tx_port=tcp://*:2001,rx_port=tcp://localhost:2000,id=ue,base_srate=23.04e6" --gw.netns=ue1
Run UE2:
cd srsRAN_4G-ER/build
sudo ./srsue/src/srsue ../.config/ue2.conf --rf.device_name=zmq --rf.device_args="tx_port=tcp://*:2011,rx_port=tcp://localhost:2010,id=ue,base_srate=23.04e6" --gw.netns=ue2
To run automated scripts for {i} UEs:
./run_srsran_{i}ue.sh # scripts are included for 2 and 4 UEs
Running UEs with CQI trace files
./run_{i}ue_cqi_emul.sh # script is included for 2 UEs
You can also run the UEs in separate terminals using the commands provided in the .sh script.
Press t to view the UE metrics on the console.
Updating the CQI channel trace: the file under concern is srsRAN_4G-er-ue1/lib/src/phy/ue/ue_dl.c. Update line 155 with the desired CQI file, which should be present in the folder srsRAN_4G-er-ue1/cqis. Repeat the same for each UE codebase.
Stream Traffic:
cd traffic-generator
Running Downlink iperf traffic
Terminal 1:
./iperf_server_{i}ues.sh
Terminal 2:
./iperf_client_{i}ues.sh <rate_ue{i}> <duration>, e.g.: ./iperf_client_2ues.sh 10M 10M 1000
Running Uplink iperf traffic
Terminal 1:
./iperf_server_{i}ues_ul.sh
Terminal 2:
./iperf_client_{i}ues_ul.sh <rate_ue{i}> <duration>, e.g.: ./iperf_client_2ues_ul.sh 10M 10M 1000
Running other kinds of downlink traffic profiles
Start the servers
sudo ip netns exec ue1 python3 server.py --host 172.16.0.2 --port 12345
sudo ip netns exec ue2 python3 server.py --host 172.16.0.3 --port 12345
sudo ip netns exec ue3 python3 server.py --host 172.16.0.4 --port 12345
sudo ip netns exec ue4 python3 server.py --host 172.16.0.5 --port 12345
Start the clients
sudo python3 client_embb.py --host 172.16.0.2 --port 12345 #--burst 250000 --min-interval 0.5 --max-interval 2.0
sudo python3 client_urllc.py --host 172.16.0.3 --port 12345
sudo python3 client_xr.py --host 172.16.0.4 --port 12345
sudo python3 client_mmtc.py --host 172.16.0.5 --port 12345
Running EdgeRIC for downlink scheduling control
cd edgeric
EdgeRIC messenger
edgeric_messenger
├── get_metrics_multi()       # receives metrics from the RAN; called by all μApps
│   └── returns the ue_data dictionary
└── send_scheduling_weight()  # sends the RT-E2 scheduling policy message to the RAN
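For orientation, here is a hedged sketch of a minimal μApp control loop built on the two messenger calls above. The function names and the ue_data fields follow this README; the import path, loop structure, and the Max CQI weight rule are assumptions for illustration.

```python
# Hypothetical μApp skeleton using the EdgeRIC messenger calls named above.
# The exact import path and call signatures may differ in the actual codebase.
from edgeric_messenger import get_metrics_multi, send_scheduling_weight

while True:
    ue_data = get_metrics_multi()          # {rnti: {'Tx': ..., 'CQI': ..., 'BL': ...}}
    if not ue_data:
        continue
    # Example policy: Max CQI, normalized into per-UE weights.
    total_cqi = sum(m['CQI'] for m in ue_data.values()) or 1
    weights = {rnti: m['CQI'] / total_cqi for rnti, m in ue_data.items()}
    send_scheduling_weight(weights)        # RT-E2 policy message: RNTI_{i}, weight_{i}
```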
μApps supported in this codebase
├── /muApp1                            # weight-based abstraction of downlink scheduling control
│   └── muApp1_run_DL_scheduling.py
└── /muApp2                            # training an RL agent to compute the downlink scheduling policy
    └── muApp2_train_RL_DL_scheduling.py
Running muApp1 - downlink scheduler
cd muApp1
sudo python3 muApp1_run_DL_scheduling.py
Setting the scheduler algorithm manually
Set the scheduling algorithm you want to run:
# Line 259
selected_algorithm = "Max CQI" # selection can be: Max CQI, Max Weight,
                                # Proportional Fair (PF), Round Robin - to be implemented
# RL - models are included for 2 UEs
If the algorithm selected is RL, set the directory for the RL model
# Line 270
rl_model_name = "Fully Trained Model" # selection can be Initial Model,
                                # Half Trained Model, Fully Trained Model - to see the benefits, run UE1 with a 5 Mbps load and UE2 with 21 Mbps
The respective models are saved in:
├── ../rl_model/
│   ├── initial_model
│   │   └── model_demo.pt
│   ├── half_trained_model
│   │   └── model_demo.pt
│   └── fully_trained_model
│       └── model_demo.pt
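As a hedged illustration, the snippet below maps the rl_model_name selection to its checkpoint path and loads it with PyTorch. The mapping dictionary and helper are hypothetical, and the exact loading call (torch.load vs. torch.jit.load vs. loading a state_dict) depends on how model_demo.pt was saved.

```python
# Hypothetical helper: resolve rl_model_name to a checkpoint path and load it.
import torch

MODEL_PATHS = {  # assumed mapping, mirroring the tree above
    "Initial Model": "../rl_model/initial_model/model_demo.pt",
    "Half Trained Model": "../rl_model/half_trained_model/model_demo.pt",
    "Fully Trained Model": "../rl_model/fully_trained_model/model_demo.pt",
}

def load_policy(rl_model_name="Fully Trained Model"):
    path = MODEL_PATHS[rl_model_name]
    # Depending on how the checkpoint was produced, this may instead require
    # torch.jit.load(path) or loading a state_dict into a policy network.
    return torch.load(path, map_location="cpu")
```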
Running muApp2 - Training an RL policy for scheduling
We are training a PPO agent with the objective of throughput maximization in this particular study.
Usage
cd muApp2
sudo python3 muApp2_train_RL_DL_scheduling.py --config-name=edge_ric
muApp2_train_RL_DL_scheduling.py trains the PPO agent for num_iters iterations. One iteration consists of training on 2048 samples and evaluating for 2048 timesteps. The evaluation metric (average reward per episode) is plotted as the training graph.
Repo Structure
├── conf
│ ├── edge_ric.yaml # Config file for edgeric RL training
│ ├── example.yaml
│ ├── simpler_streaming.yaml
│ └── single_agent.yaml
├── outputs # Output logs of each training run, sorted chronologically
│   ├── 2022-10-07
│   │   └── model_best.pt # Saved policy neural network weights
│ .
│ .
│ .
│
└── ../stream_rl # Name of the python package implementing the simulator mechanisms
├── callbacks.py
├── envs # All the envs
│ ├── cqi_traces
│ │ ├── data.csv # CQI trace to be used by simulation env
│ │ └── trace_generator.py # Code to generate synthetic CQI traces
│ ├── edge_ric.py # Our Env
│ ├── simpler_streaming_env.py
│ ├── single_agent_env.py
│   ├── streaming_env.py
│ └── __init__.py
├── __init__.py
├── plots.py # All plotting code
├── policy_net # Custom policy net architectures (not currently used)
│ ├── conv_policy.py
│ ├── __init__.py
├── registry # Registry system for registering envs and rewards (to keep things modular)
│ └── __init__.py
└── rewards.py # Definition of reward functions to be used in envs
Once the training completes, take model_best.pt and save it in the ../rl_model folder.
EdgeRIC Env (edge_ric.py)
                   CQI1         BL1
              ┌────────────┬─┬─┬─┬─┐
Bernoulli ───►│            │ │ │ │ │──► f(CQI1,allocated_RBG1)
              │            │ │ │ │ │
              └────────────┴─┴─┴─┴─┘

                   CQI2             BL2
              ┌────────────────┬─┬─┐
Bernoulli ───►│                │ │ │──► f(CQI2,allocated_RBG2)
              │                │ │ │
              └────────────────┴─┴─┘

                     .
                     .
                     .   num_UEs
                     .
                     .

              ┌────────────┬─┬─┬─┬─┐
Bernoulli ───►│            │ │ │ │ │──► f(CQI_N,allocated_RBG_N)
              │            │ │ │ │ │
              └────────────┴─┴─┴─┴─┘
State_space: [BL1,CQI1,BL2,CQI2.....] (if augmented_state_space=False)
Action_space: [Weight1,Weight2.....]
Parameters of the env are configurable in "./conf/edge_ric.yml", under the env_config field.
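To make the dynamics above concrete, here is a hedged sketch of an environment with the same structure: Bernoulli packet arrivals into per-UE backlog queues, a per-TTI service rate that is a function of CQI and the allocated RBGs, state [BL1, CQI1, ...], action equal to the per-UE weights, and reward equal to total throughput. It is not the edge_ric.py implementation, and the env_config keys and numeric constants are assumptions for illustration.

```python
# Hypothetical EdgeRIC-like env sketch; not the actual edge_ric.py implementation.
import gymnasium as gym
import numpy as np

class EdgeRICLikeEnv(gym.Env):
    def __init__(self, env_config=None):
        cfg = env_config or {}
        self.num_ues = cfg.get("num_UEs", 2)             # assumed config keys
        self.arrival_prob = cfg.get("arrival_prob", 0.5)
        self.total_rbgs = cfg.get("total_RBGs", 17)
        self.observation_space = gym.spaces.Box(0.0, np.inf, shape=(2 * self.num_ues,), dtype=np.float32)
        self.action_space = gym.spaces.Box(0.0, 1.0, shape=(self.num_ues,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.backlog = np.zeros(self.num_ues)
        self.cqi = self.np_random.integers(1, 16, self.num_ues)
        return self._obs(), {}

    def step(self, action):
        # Weights -> RBG allocation, proportional to the action vector.
        w = np.clip(np.asarray(action, dtype=float), 0.0, None)
        share = w / w.sum() if w.sum() > 0 else np.full(self.num_ues, 1.0 / self.num_ues)
        served = np.minimum(self.backlog, self.cqi * share * self.total_rbgs)  # f(CQI_i, allocated_RBG_i)
        reward = float(served.sum())                      # total system throughput
        arrivals = self.np_random.binomial(1, self.arrival_prob, self.num_ues) * 1000.0  # Bernoulli arrivals, assumed packet size
        self.backlog = self.backlog - served + arrivals
        self.cqi = self.np_random.integers(1, 16, self.num_ues)
        return self._obs(), reward, False, False, {}

    def _obs(self):
        # State layout: [BL1, CQI1, BL2, CQI2, ...]
        return np.stack([self.backlog, self.cqi], axis=1).ravel().astype(np.float32)
```

A PPO agent like the one sketched earlier could then be trained on this environment, e.g. PPO("MlpPolicy", EdgeRICLikeEnv({"num_UEs": 2})).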