Sakana AI is introducing Doc-to-LoRA and Text-to-LoRA, tools that try to solve the context-length problem by turning documents and prompts into tiny fine-tuning adapters. Instead of pasting a long PDF or mega-prompt into every chat, users can compress that information into a LoRA module and attach it to a base model.
The approach effectively gives language models a form of “persistent memory”: knowledge that survives across sessions and is not bounded by the context window. Because the adapters are generated from text alone, the method also works for text-only models, with no vision component required.
With code available on GitHub, developers can start experimenting with storing manuals, style guides, or knowledge bases as LoRA weights instead of plain text.
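To ground the idea, here is a minimal sketch of the LoRA mechanism itself (not Sakana's generation pipeline; all dimensions are illustrative): a frozen base weight `W` is adapted by a low-rank update `B @ A`, so a document's worth of fine-tuning can be stored as roughly `r * (d_in + d_out)` numbers instead of a full `d_out * d_in` weight matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4          # rank r is much smaller than d_in, d_out

W = rng.normal(size=(d_out, d_in))  # frozen base weight (never updated)
A = rng.normal(size=(r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))            # trainable up-projection, initialized to zero

def forward(x, scale=1.0):
    """Base layer output plus the low-rank LoRA correction."""
    return x @ W.T + scale * (x @ A.T) @ B.T

x = rng.normal(size=(1, d_in))
# With B initialized to zero, the adapter starts as an exact no-op:
assert np.allclose(forward(x), x @ W.T)

# The adapter is tiny relative to the full weight matrix:
print(A.size + B.size, "adapter params vs", W.size, "full-weight params")
# → 512 adapter params vs 4096 full-weight params
```

Attaching or detaching such an adapter only adds or removes the `B @ A` term, which is why a compiled document can be swapped in per conversation without retraining the base model.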