Donato Crisostomi
Implicit Inversion turns CLIP into a Decoder
CLIP is a discriminative model trained to align images and text in a shared embedding space. Due to its multimodal structure, it serves …
Antonio D'Orazio
,
Maria Rosaria Briglia
,
Donato Crisostomi
,
Dario Loi
,
Emanuele Rodolà
,
Iacopo Masi
Cite
arXiv
GitHub
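To make the discriminative setup concrete, here is a minimal sketch of naive gradient-based inversion of CLIP's shared embedding space: optimize pixels so the image embedding matches a caption embedding. This is a generic baseline using the Hugging Face transformers API, not the paper's implicit-inversion method; the checkpoint, caption, and hyperparameters are illustrative placeholders.

```python
# Naive CLIP inversion sketch: optimize an image tensor so its CLIP
# embedding matches a text embedding. NOT the paper's method, just the
# textbook baseline it improves on. Preprocessing normalization is
# omitted for brevity.
import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

tokens = tokenizer(["a photo of a dog"], return_tensors="pt")
with torch.no_grad():
    text_emb = F.normalize(model.get_text_features(**tokens), dim=-1)

# Learnable pixels, optimized through the frozen image encoder.
pixels = torch.rand(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([pixels], lr=0.05)

for step in range(200):
    img_emb = F.normalize(model.get_image_features(pixel_values=pixels), dim=-1)
    loss = 1 - (img_emb * text_emb).sum()  # cosine distance to the caption
    opt.zero_grad()
    loss.backward()
    opt.step()
    pixels.data.clamp_(0, 1)  # keep pixel values in a valid range
```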
Mergenetic: a Simple Evolutionary Model Merging Library
Model merging combines the capabilities of existing models into a new one, post hoc and without additional training. This has …
Adrian R. Minut
,
Tommaso Mencattini
,
Marco Santilli
,
Donato Crisostomi
,
Emanuele Rodolà
Cite
arXiv
GitHub
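As background on what the library automates, here is a minimal sketch of post-hoc weight merging, assuming two fine-tuned checkpoints of the same architecture. The file paths, the merge_state_dicts helper, and the mixing coefficients are illustrative placeholders for what an evolutionary search would tune; this is not Mergenetic's actual API.

```python
# Minimal sketch of post-hoc weight merging: a convex combination of
# fine-tuned checkpoints, key by key. The mix coefficients are what an
# evolutionary search would optimize; the values below are placeholders.
import torch

def merge_state_dicts(state_dicts, coeffs):
    """Weighted average of parameter tensors (hypothetical helper)."""
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(c * sd[key] for c, sd in zip(coeffs, state_dicts))
    return merged

# Two fine-tuned variants of the same base model (paths are placeholders).
sd_a = torch.load("model_math.pt")
sd_b = torch.load("model_code.pt")

merged = merge_state_dicts([sd_a, sd_b], coeffs=[0.6, 0.4])
torch.save(merged, "model_merged.pt")  # no additional training required
```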
Efficient Generation of Multimodal Fluid Simulation Data
In this work, we introduce an efficient generation procedure to produce synthetic multi-modal datasets of fluid simulations. The …
Daniele Baieri
,
Donato Crisostomi
,
Stefano Esposito
,
Filippo Maggioli
,
Emanuele Rodolà
Cite
arXiv
STAGE: Stemmed Accompaniment Generation through Prefix-Based Conditioning
Recent advances in generative models have made it possible to create high-quality, coherent music, with some systems delivering …
Giorgio Strano
,
Chiara Ballanti
,
Donato Crisostomi
,
Michele Mancusi
,
Luca Cosmo
,
Emanuele Rodolà
Cite
arXiv
GitHub
Activation Patching for Interpretable Steering in Music Generation
Understanding how large audio models represent music, and using that understanding to steer generation, is both challenging and …
Simone Facchiano
,
Giorgio Strano
,
Donato Crisostomi
,
Irene Tallini
,
Tommaso Mencattini
,
Fabio Galasso
,
Emanuele Rodolà
Cite
arXiv
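For readers unfamiliar with the technique, here is a generic sketch of activation patching with PyTorch forward hooks: cache a layer's activation on a source input, then substitute it during a forward pass on a target input to steer the output. The toy model and the wholesale replacement are illustrative assumptions, not the paper's audio-model setup.

```python
# Generic activation-patching sketch: record an activation on a "source"
# input, then overwrite the same layer's output on a "target" input.
# The toy MLP stands in for a large audio model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4)).eval()
layer = model[0]  # layer to patch (placeholder choice)

cache = {}

def record(module, inputs, output):
    cache["act"] = output.detach()  # stash the source activation

def patch(module, inputs, output):
    return cache["act"]  # returning a tensor replaces the layer output

src, tgt = torch.randn(1, 8), torch.randn(1, 8)

handle = layer.register_forward_hook(record)
model(src)            # 1) cache the activation from the source input
handle.remove()

handle = layer.register_forward_hook(patch)
steered = model(tgt)  # 2) run the target input with the patched activation
handle.remove()
```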
LoopGen: Training-Free Loopable Music Generation
Loops (short audio segments designed for seamless repetition) are central to many music genres, particularly those rooted in …
Davide Marincione
,
Giorgio Strano
,
Donato Crisostomi
,
Roberto Ribuoli
,
Emanuele Rodolà
Cite
arXiv
MASS: MoErging through Adaptive Subspace Selection
Model merging has recently emerged as a lightweight alternative to ensembling, combining multiple fine-tuned models into a single set …
Donato Crisostomi
,
Alessandro Zirilli
,
Antonio Andrea Gargiulo
,
Maria Sofia Bucarelli
,
Simone Scardapane
,
Fabrizio Silvestri
,
Iacopo Masi
,
Emanuele Rodolà
Cite
arXiv
GitHub
Humanity's Last Exam
Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are …
More than 600 authors, including
Donato Crisostomi
,
Emanuele Rodolà
Cite
URL
GitHub
arXiv
ATM: Improving Model Merging by Alternating Tuning and Merging
Model merging has recently emerged as a cost-efficient paradigm for multi-task learning. Among current approaches, task arithmetic …
Luca Zhou
,
Daniele Solombrino
,
Donato Crisostomi
,
Maria Sofia Bucarelli
,
Fabrizio Silvestri
,
Emanuele Rodolà
Cite
arXiv
GitHub
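As a reference point for the task-arithmetic baseline mentioned in the abstract, here is a minimal sketch of merging via task vectors: each task vector is the delta between a fine-tuned model and the base model, and merging adds a scaled sum of these deltas back to the base. Checkpoint paths and the scaling coefficient are placeholders, and ATM's alternating tune/merge loop is not shown.

```python
# Plain task arithmetic, the baseline ATM builds on:
#   task vector   tau_t  = theta_t - theta_0
#   merged model  theta  = theta_0 + lam * sum_t tau_t
# Paths and the coefficient lam are illustrative placeholders.
import torch

base = torch.load("base.pt")
finetuned = [torch.load("task_a.pt"), torch.load("task_b.pt")]

# Task vectors: per-parameter deltas from the shared base model.
task_vectors = [{k: sd[k] - base[k] for k in base} for sd in finetuned]

# Merged weights: base plus a scaled sum of task vectors.
lam = 0.3
merged = {k: base[k] + lam * sum(tv[k] for tv in task_vectors) for k in base}
torch.save(merged, "merged.pt")
```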
Preface of UniReps: the Second Edition of the Workshop on Unifying Representations in Neural Models
Discover why, when and how distinct learning processes yield similar representations, and the degree to which these can be unified.
Clementine Domine
,
Marco Fumero
,
Zorah Lähner
,
Donato Crisostomi
,
Luca Moschella
,
Kimberly Stachenfeld
Cite
Article