# RAMPART-FL: A Dataset for Offline Reinforcement Learning in Federated Participant Selection
## Dataset Summary
This repository contains a dataset package generated by the RAMPART-FL framework, a system for researching Reinforcement Learning (RL)-based participant selection in Federated Learning (FL) for intrusion detection. The data originates from a 400-round, 25-client simulation in which an RL agent used a Multi-Criteria strategy to select clients.
This package provides two distinct files to serve different research needs:

- `rampart_fl_400r_25c_multicriteria_event_log.csv`: A detailed, event-driven log, ideal for in-depth analysis, custom feature engineering, and understanding the step-by-step behavior of the FL process.
- `rampart_fl_400r_25c_multicriteria_rl_transitions.csv`: A pre-processed, analysis-ready dataset of `(State, Action, Reward, Next_State)` tuples, enriched with contextual client and reward information.
The scripts used to generate these files from the raw experimental output are available in the associated GitHub repository.
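Both files can be loaded directly with pandas. A minimal sketch, assuming the CSVs sit in the working directory; note that `client_cid` is documented below as a large integer that can exceed the 64-bit range, so it is safest to read it as a string:

```python
import pandas as pd

# client_cid can exceed the 64-bit integer range, so read it as a string.
event_log = pd.read_csv(
    "rampart_fl_400r_25c_multicriteria_event_log.csv",
    dtype={"client_cid": str},
)
transitions = pd.read_csv(
    "rampart_fl_400r_25c_multicriteria_rl_transitions.csv",
    dtype={"client_cid": str},
)

print(event_log["event_type"].value_counts())
print(transitions.shape)
```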
## Associated Research
This dataset is a key contribution of a master's thesis focused on creating robust and adaptive security solutions for IoT networks using Federated Learning.
- Framework: RAMPART-FL on GitHub
## File 1: The Refined Event Log (`rampart_fl_400r_25c_multicriteria_event_log.csv`)
This file provides a detailed log of all relevant events that occurred during the simulation. It is best suited for researchers who need to perform detailed analysis beyond standard RL transition data.
The dataset is structured in a sparse, event-driven format: for any given row, only the columns relevant to its `event_type` contain data, and all other columns in that row are empty.
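Given that sparse layout, a practical first step is to split the log by `event_type` and drop the columns that are empty for each event. A sketch with pandas, using the event names documented in the table below:

```python
import pandas as pd

event_log = pd.read_csv(
    "rampart_fl_400r_25c_multicriteria_event_log.csv",
    dtype={"client_cid": str},
)

# One DataFrame per event type, keeping only the columns that
# actually carry data for that event.
by_event = {
    name: group.dropna(axis=1, how="all")
    for name, group in event_log.groupby("event_type")
}

selection = by_event["selection_info"]   # state, action, and policy columns
learning = by_event["learning_update"]   # reward columns
```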
### Event Log Columns
| Column Name | Description | Data Type |
|---|---|---|
| **Core Identifiers** | | |
| `elapsed_seconds_since_start` | Time in seconds since the simulation began. | Float |
| `server_round` | The FL training round number. | Integer |
| `event_type` | The type of event being logged (`selection_info`, `learning_update`, etc.). | String |
| `client_cid` | The unique identifier for the client. | Integer (Large) |
| **State, Action & Policy (`selection_info`)** | | |
| `s_client_state_tuple` | The state vector (S) representing the client's condition. | String |
| `s_was_selected` | The action (A) taken by the agent (`1` or `0`). | Integer |
| `s_available_cids_count` | The number of clients available for selection. | Integer |
| `s_client_q_value` | The Q-value of the state-action pair from the original agent. | Float |
| `s_client_selection_prob` | The selection probability from the original agent's policy. | Float |
| **Reward & Learning (`learning_update`)** | | |
| `l_state_at_selection` | The state (S) for which the reward is being applied. | String |
| `l_reward_for_action` | The total global reward (R) for the round's actions. | Float |
| `l_reward_*_component` | The individual components (performance, fairness, etc.) of the total reward. | Float |
| **Client Evaluation Metrics (`client_eval_metrics`)** | | |
| `c_eval_partition_id` | The client's static data partition ID. | Integer |
| `c_eval_profile_name` | The client's hardware profile name. | String |
| `c_eval_cores` | Number of CPU cores available to the client. | Integer |
| `c_eval_f1`, `c_eval_accuracy`, etc. | The client's local evaluation performance metrics. | Float |
| `c_eval_num_samples` | Number of samples in the client's local test set. | Integer |
| **Client Fit Metrics (`client_fit_metrics`)** | | |
| `c_fit_time_seconds` | Time taken for the client's local training task. | Float |
| `c_fit_cpu_percent` | The client's average CPU usage during training. | Float |
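The state vectors are stored as strings. Assuming they are Python-tuple-formatted text, as the name `s_client_state_tuple` suggests (inspect a sample row to confirm), they can be parsed into numeric arrays for feature engineering:

```python
import ast

import numpy as np
import pandas as pd

event_log = pd.read_csv(
    "rampart_fl_400r_25c_multicriteria_event_log.csv",
    dtype={"client_cid": str},
)
selection = event_log[event_log["event_type"] == "selection_info"]

def parse_state(s: str) -> np.ndarray:
    """Parse a tuple-formatted state string, e.g. '(0.5, 1, 0.2)'."""
    return np.asarray(ast.literal_eval(s), dtype=np.float32)

states = np.stack(selection["s_client_state_tuple"].map(parse_state).tolist())
actions = selection["s_was_selected"].to_numpy(dtype=np.int64)
print(states.shape, actions.shape)
```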
## File 2: Enriched RL Transitions Dataset (`rampart_fl_400r_25c_multicriteria_rl_transitions.csv`)
This file provides a clean, analysis-ready dataset where each row is a complete, enriched `(State, Action, Reward, Next_State)` tuple. It is designed for direct use in most offline RL training libraries and workflows. Contextual columns (like `c_eval_f1`) contain data only if the `action` for that row was `1`.
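One practical consequence: rows with `action == 0` hold NaN in those contextual columns, so restrict to selected rows before using them as features. A small sketch:

```python
import pandas as pd

transitions = pd.read_csv(
    "rampart_fl_400r_25c_multicriteria_rl_transitions.csv",
    dtype={"client_cid": str},
)

# Context columns are NaN whenever action == 0, so keep only selected rows.
selected = transitions[transitions["action"] == 1]

context_cols = [
    "c_eval_cores", "c_eval_f1", "c_eval_num_samples",
    "c_fit_time_seconds", "c_fit_cpu_percent",
]
print(selected[context_cols].describe())

# Mean local F1 per hardware profile among selected clients.
print(selected.groupby("c_eval_profile_name")["c_eval_f1"].mean())
```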
### Transitions Dataset Columns
| Column | Data Type | Description |
|---|---|---|
| **Core Transition** | | |
| `server_round` | Integer | The server round `N` in which the state was observed and the action was taken. |
| `client_cid` | String | The unique identifier for the client. |
| `state` | String | The state vector (S_t) representing the client's condition at the start of round `N`. |
| `action` | Integer | The action (A_t) taken for the client: `1` if selected, `0` otherwise. |
| `reward` | Float | The global reward (R_{t+1}) received after the completion of round `N`. |
| `next_state` | String | The subsequent state vector (S_{t+1}) for that same client at the start of round `N+1`. |
| **Reward Components** | | |
| `l_reward_performance_component` | Float | The portion of the reward attributed to the global model's performance. |
| `l_reward_fairness_penalty_component` | Float | The portion of the reward attributed to the fairness penalty. |
| `l_reward_resource_cost_component` | Float | The portion of the reward attributed to the resource cost of selected clients. |
| **Client Context (for the round)** | | |
| `c_eval_profile_name` | String | The client's hardware profile name (e.g., `High-End Edge CPU`). |
| `c_eval_cores` | Float | Number of CPU cores available to the client (Float due to possible NaNs). |
| `c_eval_f1` | Float | The client's local F1-score from its evaluation in this round. |
| `c_eval_num_samples` | Float | Number of samples in the client's local test set. |
| `c_fit_time_seconds` | Float | Time taken for the client's local training in this round. |
| `c_fit_cpu_percent` | Float | The client's average CPU usage during training. |
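Putting the core transition columns together, here is a hedged sketch that assembles (S, A, R, S') arrays for offline RL libraries. It again assumes the state columns hold Python-tuple-formatted strings, and it treats each client's final-round transition as terminal, since no round follows the last one:

```python
import ast

import numpy as np
import pandas as pd

transitions = pd.read_csv(
    "rampart_fl_400r_25c_multicriteria_rl_transitions.csv",
    dtype={"client_cid": str},
)

# Assumes the state columns hold Python-tuple-formatted strings.
def parse_state(s: str) -> np.ndarray:
    return np.asarray(ast.literal_eval(s), dtype=np.float32)

observations = np.stack(transitions["state"].map(parse_state).tolist())
actions = transitions["action"].to_numpy(dtype=np.int64)
rewards = transitions["reward"].to_numpy(dtype=np.float32)
next_observations = np.stack(transitions["next_state"].map(parse_state).tolist())

# No round follows the last one, so mark each client's final-round
# transition as terminal.
terminals = (
    transitions["server_round"] == transitions["server_round"].max()
).to_numpy()
```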