Misty-GRED: Generative Robot Emotional Displays
This is a GPT-2-medium-based causal language model fine-tuned to generate robot behaviors given an emotion label. It is trained on emotion-behavior data collected from the Misty robot and produces textual descriptions of Misty's actions that align with the given emotion.
Development code: https://github.com/bsu-slim/emro-gred-misty
Model Details
Base Model: GPT-2 Medium
Model type: Causal Language Model
Inputs: A prompt of the form Emotion: <emotion> Behaviors: (wrapped in the <|startoftext|> / <|endoftext|> special tokens, as in the usage example below)
Outputs: A text sequence representing Misty robot behaviors
Emotion Labels
This model was trained on six grouped emotion labels (collected into a Python list in the sketch below):
anger_frustration
interest_desire
confusion_sorrow_boredom
joy_hope
understanding_gratitude_relief
disgust_surprise_alarm_fear
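For convenience, the grouped labels can be kept in a plain Python list. This is a minimal sketch: the list literal simply copies the names above, and EMOTION_LABELS is an arbitrary variable name, not something the model itself exposes.

# The six grouped emotion labels the model was trained on,
# copied verbatim from the list above.
EMOTION_LABELS = [
    "anger_frustration",
    "interest_desire",
    "confusion_sorrow_boredom",
    "joy_hope",
    "understanding_gratitude_relief",
    "disgust_surprise_alarm_fear",
]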
How to Use
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "bsu-slim/gred-misty"

# Load the fine-tuned model and tokenizer (use_fast=False recommended).
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()

# Build the prompt in the format the model was trained on.
emotion = "joy_hope"
prompt = f"<|startoftext|>Emotion: {emotion} <|endoftext|> Behaviors:"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Generate behaviors with top-k / nucleus sampling.
output = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode and keep only the text after "Behaviors:" (the prompt is echoed back).
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(f"Input Emotion: {emotion}")
print("Generated Behavior:")
print(generated_text.split("Behaviors:")[1].strip())
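To sample a behavior sequence for every grouped emotion, one option is to loop over the labels with the same generation settings. This is only a sketch that reuses model, tokenizer, and device from the snippet above and the EMOTION_LABELS list from earlier; it is not a separate API.

# Sketch: sample one behavior sequence per grouped emotion label,
# reusing `model`, `tokenizer`, `device`, and `EMOTION_LABELS` from above.
for emotion in EMOTION_LABELS:
    prompt = f"<|startoftext|>Emotion: {emotion} <|endoftext|> Behaviors:"
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    output = model.generate(
        **inputs,
        max_new_tokens=50,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )
    text = tokenizer.decode(output[0], skip_special_tokens=True)
    behaviors = text.split("Behaviors:")[1].strip()
    print(f"{emotion}: {behaviors}")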
Example Output
Input Emotion: joy_hope
Generated Behavior:
drive_track_36_0_1 display_face_resources_misty_faces_black_7_1 say_text_wahoo! move_arm_both_85_80 move_arm_both_0_80
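The generated behavior is a space-separated sequence of underscore-delimited action tokens. Below is a hypothetical helper that splits such a string into (action, arguments) pairs before mapping them onto Misty commands; the KNOWN_ACTIONS prefixes and the parse_behaviors function are assumptions inferred from the example output above, not part of the model or the official Misty API.

# Hypothetical parser for the generated behavior string. The action-name
# prefixes below are inferred from the example output and are NOT an
# official vocabulary of the model or the Misty API.
KNOWN_ACTIONS = ("drive_track", "display_face", "say_text", "move_arm")

def parse_behaviors(behavior_string):
    """Split a space-separated behavior string into (action, args) pairs."""
    parsed = []
    for token in behavior_string.split():
        action = next((a for a in KNOWN_ACTIONS if token.startswith(a + "_")), None)
        if action is None:
            parsed.append((token, []))  # unknown token: keep it whole
        else:
            # NOTE: arguments that themselves contain underscores (e.g. the
            # display_face resource path) would need extra handling.
            args = token[len(action) + 1:].split("_")
            parsed.append((action, args))
    return parsed

print(parse_behaviors("drive_track_36_0_1 say_text_wahoo! move_arm_both_85_80"))
# [('drive_track', ['36', '0', '1']), ('say_text', ['wahoo!']), ('move_arm', ['both', '85', '80'])]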