# Materials for the paper "Bridging Logic and Learning: Decoding Temporal Logic Embeddings via Transformers" (Candussio et al.) @ ECML-PKDD 2025

**TL;DR:**

- Trained models are available at: https://huggingface.co/collections/saracandu/stldec-ecml-pkdd-2025-686fe174a16915bc32aa53eb
- Code, results, and further details can be found in this repository.

The goal of STLdecoder is to take a neuro-symbolic (NeSy) embedding of a Signal Temporal Logic (STL) formula and recover a semantically equivalent formula.

The `encoder.py` file computes the NeSy embeddings of (a list of) formulae with respect to a predefined anchor set, which you can find in the `anchor_sets/` folder. More details on this procedure can be found at https://ebooks.iospress.nl/doi/10.3233/FAIA240638

This class also relies on the following files: `phis_generator.py`, `traj_measure.py`, `kernel.py`, `stl.py`, `anchor_set_generation.py`, `custom_typing.py`, and `trajectories.py`.
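
As a concrete illustration, here is a minimal usage sketch. The entry-point class `STLEncoder`, its `anchor_set_path` argument, the `compute_embeddings` method, and the anchor-set filename are all assumptions made for illustration; check `encoder.py` and the `anchor_sets/` folder for the actual API.

```python
# Hypothetical sketch of embedding formulae with encoder.py; all names below
# (STLEncoder, anchor_set_path, compute_embeddings) are assumed, not verbatim.
from encoder import STLEncoder  # assumed entry point

# The anchor set fixes the reference formulae against which the kernel
# embedding of each input formula is computed.
encoder = STLEncoder(anchor_set_path="anchor_sets/anchor_set.pickle")  # placeholder filename

# STL formulae written as strings in the hybrid syntax described below.
formulae = ["always ( x_0 <= 0.5 )", "eventually ( x_1 >= -1.0 )"]

embeddings = encoder.compute_embeddings(formulae)  # one NeSy vector per formula
print(embeddings.shape)  # e.g. (num_formulae, embedding_dim)
```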

The `decoder.py` component translates a vector (i.e., the encoding of a formula, as produced by `encoder.py`) back into a string: an STL formula written in a hybrid syntax of numbers, parentheses, and words, whose vocabulary can be found in the `tokenizer_files/` folder.

This is implemented in the `modeling_stldec.py` file, where the translation is performed by a decoder-only Transformer architecture: the tokens of the STL formula are generated autoregressively and embedded, and at each step this information is merged with the initial semantic vector through a cross-attention block. The `configuration.py` file provides the fixed configuration structure that guides the `transformers` classes.
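
The sketch below illustrates this generation loop under several assumptions: the class names `STLDecoder` and `STLDecoderConfig`, the `encoder_hidden_states` keyword (the usual `transformers` convention for feeding cross-attention), and the `hidden_size` attribute are hypothetical placeholders, not the repository's verbatim API.

```python
# Hypothetical sketch of greedy autoregressive decoding; class names, the
# encoder_hidden_states keyword, and config attributes are assumptions.
import torch
from transformers import AutoTokenizer

from configuration import STLDecoderConfig  # assumed class name
from modeling_stldec import STLDecoder      # assumed class name

tokenizer = AutoTokenizer.from_pretrained("tokenizer_files/")
config = STLDecoderConfig()
model = STLDecoder(config).eval()

# The NeSy semantic vector from encoder.py, treated as a length-1 "source
# sequence" so that the cross-attention block can attend to it.
embedding = torch.randn(1, 1, config.hidden_size)  # placeholder vector

# Greedy generation: start from BOS and append the most likely next token
# until EOS or a length budget is reached.
ids = torch.tensor([[tokenizer.bos_token_id]])
with torch.no_grad():
    for _ in range(128):
        logits = model(input_ids=ids, encoder_hidden_states=embedding).logits
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break

print(tokenizer.decode(ids[0], skip_special_tokens=True))
```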

To train this architecture, use the `training.py` script together with one of the training settings available in the `training_config/` folder.
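
For instance, a run could be launched as follows; the `--config` flag and the config filename are assumptions, so check `training.py` for the command-line interface it actually exposes.

```python
# Hypothetical launch of a training run; the --config flag and the config
# filename are placeholders -- inspect training.py for the exact interface.
import subprocess

subprocess.run(
    ["python", "training.py", "--config", "training_config/example.yaml"],  # placeholder config
    check=True,
)
```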