LLMs and Transformers

TaperNorm
Gated removal of normalization in Transformers for stable training and inference-time folding (a gated-norm sketch appears below).

Parameter-efficient Adaptation of the Tokenizer-free Byte Latent Transformer
Can a tokenizer-free architecture be adapted to new languages by retraining only the ~4% "interface" modules while keeping the core transformer fixed? (A parameter-freezing sketch appears below.)

Optimization and Control

Safe Explicit Model Predictive Control via Constrained Neural Network Training
Fast MPC policies learned offline under safety constraints, using a primal-dual (augmented Lagrangian) training loop (sketched below).

WNNM Image Denoising
Python implementation of weighted nuclear norm minimization (WNNM), compared against Gaussian, bilateral, and non-local means (NLM) baselines (the core shrinkage step is sketched below).

Semiconductor ML

Invariance Constraints for Computational Lithography
Injecting expert invariances into deep models (CNNs) with exact-fit constraints on critical samples (sketched below).

Inverse Modeling for Metrology
Faster, more accurate inverse modeling for semiconductor metrology.
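For TaperNorm, a minimal sketch of the gated-removal idea: a LayerNorm blended with the identity by a gate, so that as the gate tapers to zero the layer becomes a no-op that can be folded away at inference. The class name, the learnable scalar gate, and the folding threshold are illustrative assumptions, not the actual implementation.

```python
import torch
import torch.nn as nn

class GatedNorm(nn.Module):
    """Hypothetical sketch: LayerNorm interpolated with the identity by a gate.

    As the gate tapers toward 0 during training, the module approaches the
    identity map, so at inference the normalization can be folded away.
    """
    def __init__(self, dim: int, init_gate: float = 1.0):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        # Scalar gate; could equally be driven by a fixed taper schedule.
        self.gate = nn.Parameter(torch.tensor(init_gate))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.gate.clamp(0.0, 1.0)
        return g * self.norm(x) + (1.0 - g) * x

    def fold(self) -> nn.Module:
        # Inference-time folding: once the gate has tapered to ~0,
        # the whole module collapses to the identity.
        return nn.Identity() if self.gate.item() < 1e-3 else self
```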
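For the Byte Latent Transformer adaptation, a minimal sketch of retraining only the interface modules: freeze everything except parameters whose names match the byte-level encoder/decoder, then report the trainable fraction. The prefix names are assumptions about how such a model might be organized, not the BLT code base's identifiers.

```python
import torch.nn as nn

def freeze_core(model: nn.Module,
                interface_prefixes=("local_encoder", "local_decoder")):
    """Keep only the 'interface' modules trainable; freeze the core transformer."""
    trainable, total = 0, 0
    for name, p in model.named_parameters():
        p.requires_grad = name.startswith(interface_prefixes)
        total += p.numel()
        trainable += p.numel() if p.requires_grad else 0
    print(f"trainable fraction: {trainable / total:.1%}")  # target: ~4%
    return model
```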
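For the safe explicit MPC project, a minimal sketch of a primal-dual (augmented Lagrangian) training loop: the policy minimizes an MPC cost while a dual variable is ascended on the observed safety violation. `cost_fn(x, u)` and `constraint_fn(x, u) <= 0` are placeholders for the problem-specific cost and safety constraint; the constants are illustrative.

```python
import torch
import torch.nn as nn

def train_safe_policy(policy: nn.Module, cost_fn, constraint_fn, loader,
                      epochs: int = 50, rho: float = 10.0, lr: float = 1e-3):
    """Augmented Lagrangian sketch: primal step on the policy, dual ascent on lam."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    lam = torch.tensor(0.0)  # multiplier for the aggregated safety constraint
    for _ in range(epochs):
        for x in loader:
            u = policy(x)
            violation = torch.relu(constraint_fn(x, u)).mean()
            loss = cost_fn(x, u).mean() + lam * violation + 0.5 * rho * violation**2
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Dual ascent: tighten the multiplier on the remaining violation.
        with torch.no_grad():
            u = policy(x)
            lam = torch.clamp(lam + rho * torch.relu(constraint_fn(x, u)).mean(),
                              min=0.0)
    return policy
```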
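For WNNM denoising, the core step is a closed-form weighted singular value thresholding applied to each matrix of grouped similar patches; larger singular values receive smaller weights and are shrunk less. The weight formula w_i = c*sqrt(n)/(sigma_i + eps) follows the common choice in Gu et al. (2014), but treat the constants here as assumptions.

```python
import numpy as np

def wnnm_shrink(Y: np.ndarray, sigma_noise: float,
                c: float = 2.8, eps: float = 1e-8) -> np.ndarray:
    """Weighted singular value thresholding on one patch-group matrix Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    n = Y.shape[1]  # number of similar patches in the group
    # Estimate the singular values of the clean matrix.
    s_clean = np.sqrt(np.maximum(s**2 - n * sigma_noise**2, 0.0))
    w = c * np.sqrt(n) / (s_clean + eps)   # large singular values shrunk less
    s_hat = np.maximum(s - w, 0.0)         # weighted soft-thresholding
    return (U * s_hat) @ Vt
```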
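For the lithography project, one way to read "exact-fit constraints on critical samples" is as equality constraints f(x_i) = y_i on a small set of expert-designated samples, enforced with per-sample augmented Lagrangian multipliers while the bulk data is fit by ordinary regression. The sketch below shows that reading; the function names, constants, and training setup are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train_with_exact_fit(model: nn.Module, loader, x_crit, y_crit,
                         epochs: int = 100, rho: float = 5.0, lr: float = 1e-3):
    """Ordinary regression on bulk data plus equality constraints on x_crit."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    lam = torch.zeros(len(x_crit))  # one multiplier per critical sample
    mse = nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            r = model(x_crit).squeeze(-1) - y_crit   # constraint residuals
            loss = mse(model(x), y) + (lam * r).sum() + 0.5 * rho * (r**2).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Dual ascent drives the residuals on critical samples toward zero.
        with torch.no_grad():
            lam += rho * (model(x_crit).squeeze(-1) - y_crit)
    return model
```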