This section covers chemical language models: architectures that learn molecular representations directly from chemical string notations (SMILES, SELFIES, InChI). Notes here cover encoder-only transformers such as ChemBERTa and MoLFormer for property prediction, sequence-to-sequence models such as Chemformer for reaction prediction, decoder-only generators such as GP-MoLFormer for molecular generation, and translation models such as STOUT for SMILES-to-IUPAC conversion. For multimodal and reasoning LLMs applied to chemistry, see LLMs for Chemistry.
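Before any of these models see a molecule, the string notation has to be split into tokens. A minimal sketch of the regex-based SMILES tokenization commonly used in this line of work (the pattern below follows the widely circulated Molecular Transformer-style regex; it is an illustrative assumption, not taken from any specific model's codebase):

```python
import re

# Regex-based SMILES tokenizer. Bracket atoms like [NH4+] become single
# tokens, two-letter elements (Cl, Br) are kept whole, and every other
# character (atoms, bonds, ring-closure digits, branches) is one token.
# Assumed pattern, in the style used by Molecular Transformer-type models.
SMILES_TOKEN_PATTERN = re.compile(
    r"(\[[^\]]+\]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p"
    r"|\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%\d{2}|\d)"
)

def tokenize_smiles(smiles: str) -> list[str]:
    """Split a SMILES string into model-ready tokens."""
    tokens = SMILES_TOKEN_PATTERN.findall(smiles)
    # Sanity check: tokenization must be lossless (round-trips to the input).
    assert "".join(tokens) == smiles, f"untokenizable SMILES: {smiles}"
    return tokens

aspirin = "CC(=O)Oc1ccccc1C(=O)O"
print(tokenize_smiles(aspirin))
```

For aspirin every token happens to be a single character, but on inputs like `CCl` or `[NH4+]` the regex correctly emits `Cl` and the whole bracket atom as single tokens, which is what keeps the vocabulary chemically meaningful.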