ToxicBias-Reasoning: A Multicultural Dataset for Social Bias Detection with Human-Aligned Reasoning

Social bias in language models continues to create fairness risks in multilingual and multicultural environments. Existing datasets offer limited cultural diversity, insufficient support for overlapping bias categories, and little human-interpretable reasoning, which reduces the transparency and reliability of bias detection. The ToxicBias-Reasoning dataset addresses these gaps by providing 7,562 annotated statements representing bias related to caste, religion, race, gender,