Code and data for our IROS paper: "Are Large Language Models Aligned with People's Social Intuitions for Human–Robot Interactions?"
EthosGPT is an open-source framework that maps how Large Language Models align with diverse human values, promoting cultural and ethical diversity in AI-driven decision-making.
PRISM: A Multi-Perspective AI Alignment Framework for Ethical AI (Demo: https://app.prismframework.ai | Paper: https://arxiv.org/abs/2503.04740)
A comprehensive toolkit for implementing, analyzing, and validating AI value alignment based on Anthropic's 'Values in the Wild' research.
A data-driven framework mapping daily activities to multi-horizon goals, exploring time-to-value realization beyond traditional 80/20 optimization.