Metaphor identification as token classification
Modern NLP systems frame metaphor identification as binary classification over tokens, typically with contextual embeddings.
The dominant computational framing: each token receives one label from {metaphor, literal}. Features have evolved from hand-crafted lexical resources (concreteness norms, WordNet supersenses) to contextual embeddings (BERT, RoBERTa, and more recently decoder-only models).
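A minimal sketch of this framing, assuming an invented sentence, invented gold labels, and a stand-in prediction vector (real systems would derive per-token scores from contextual embeddings); the point is only that both prediction and evaluation operate token by token:

```python
# Token classification framing: each token gets a binary label,
# 1 = metaphor, 0 = literal. Sentence and labels are hypothetical.

def token_f1(gold, pred):
    """Per-token precision/recall/F1 for the positive (metaphor) class."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

tokens = ["she", "attacked", "his", "argument"]
gold   = [0, 1, 0, 0]   # "attacked" used metaphorically (hypothetical annotation)
pred   = [0, 1, 0, 1]   # a model that over-predicts on "argument"

p, r, f = token_f1(gold, pred)
print(round(p, 2), round(r, 2), round(f, 2))  # → 0.5 1.0 0.67
```

Per-token F1 on the metaphor class is the standard report in this setting, since literal tokens heavily dominate and plain accuracy is uninformative.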
Datasets are typically MIPVU-annotated, which means the labels follow the procedure in mipvu-procedure rather than the conceptual mappings of conceptual-metaphor-theory. The mismatch matters: a model trained on MIPVU labels learns lexical-contrast features rather than source-domain mappings, so it tends to fail on novel metaphors whose surface patterns are absent from the training data.
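To make the contrast concrete, here is a toy sketch of a MIPVU-style decision: a word is flagged as metaphor-related when its contextual meaning contrasts with a more basic, concrete meaning. The mini-lexicon and the abstract-context cue are both invented for illustration, not part of any real annotation tool:

```python
# Toy MIPVU-style check: label by lexical contrast with a basic sense.
# BASIC_SENSE and the abstract-noun cue are hypothetical stand-ins.

BASIC_SENSE = {
    "attack": "physical",
    "defend": "physical",
    "table": "object",
}

def contextual_domain(word, context):
    # Naive cue: abstract-noun neighbours signal a non-physical use.
    abstract = {"argument", "idea", "claim", "theory"}
    return "abstract" if abstract & set(context) else BASIC_SENSE.get(word, "unknown")

def mipvu_style_label(word, context):
    """1 if the contextual sense contrasts with the basic sense, else 0."""
    basic = BASIC_SENSE.get(word)
    return int(basic is not None and contextual_domain(word, context) != basic)

print(mipvu_style_label("attack", ["the", "argument"]))  # → 1 (contrast: metaphor-related)
print(mipvu_style_label("attack", ["the", "castle"]))    # → 0 (basic physical use)
```

Note that nothing in the rule mentions a source domain: it detects contrast with a basic sense, never a mapping like ARGUMENT-as-combat. That is exactly the gap between MIPVU labels and conceptual-metaphor-theory that the note describes.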