Unleashing the True Potential of Sequence-to-Sequence Models for Sequence Tagging and Structure Parsing
Abstract
Sequence-to-Sequence (S2S) models have achieved remarkable success on various text generation tasks. However, learning complex structures with S2S models remains challenging, as external neural modules and additional lexicons are often supplemented to predict non-textual outputs. We present a systematic study of S2S modeling