
Doc-to-LoRA and Text-to-LoRA give LLMs “persistent memory”

Sakana AI interface showing the process of converting a document into a LoRA adapter for fine-tuning.

Sakana AI is introducing Doc-to-LoRA and Text-to-LoRA, tools that try to solve the context-length problem by turning documents and prompts into tiny fine-tuning adapters. Instead of pasting a long PDF or mega-prompt into every chat, users can compress that information into a LoRA (low-rank adaptation) module and attach it to a base model.
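The reason these adapters are "tiny" is the low-rank structure of LoRA itself: rather than storing a full fine-tuned weight matrix, an adapter stores two thin matrices whose product is added to the frozen base weight. The sketch below is illustrative only (it is not Sakana AI's code, and the matrices are placeholder values); it shows the merge step W + (alpha / r) * B @ A in plain Python:

```python
# Illustrative sketch of the LoRA idea (not Sakana AI's implementation):
# a fine-tune is stored as a low-rank delta B @ A and merged into the
# frozen base weight W at load time.

def matmul(X, Y):
    """Plain-Python matrix multiply, kept simple for the sketch."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def merge_lora(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A, the merged weight matrix."""
    scale = alpha / r
    BA = matmul(B, A)  # d x d delta, built from two thin matrices
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

# Tiny 4x4 example with rank r = 2 (real models use d in the thousands).
d, r, alpha = 4, 2, 4
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base
A = [[0.1] * d for _ in range(r)]   # r x d "down" projection
B = [[0.5] * r for _ in range(d)]   # d x r "up" projection

W_merged = merge_lora(W, A, B, alpha, r)

# Why the adapter is small: for one d x d layer, full fine-tuning stores
# d*d parameters, while LoRA stores only 2*r*d. At d = 4096, r = 8 that is
# ~16.8M parameters versus ~65k per layer.
full_params = d * d
lora_params = 2 * r * d
```

At this toy scale the savings are invisible, but the ratio 2·r·d versus d² is what lets a whole document's worth of fine-tuning travel as a small attachable file.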

The approach effectively gives language models a form of “persistent memory” that survives across sessions and sidesteps the usual context-window limits. It also works for pure text models, without needing any vision component.

With code available on GitHub, developers can start experimenting with storing manuals, style guides, or knowledge bases as LoRA weights instead of plain text.
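In practice, that "knowledge base as LoRA weights" workflow amounts to serializing adapter matrices in one session and re-attaching them in a later one. The snippet below is a hedged, stdlib-only sketch of that round trip (the file layout and field names are assumptions for illustration, not the format used by Sakana AI's repository):

```python
# Hypothetical sketch: persist a LoRA adapter to disk in one session and
# reload it in another, standing in for "storing a manual or style guide
# as LoRA weights instead of plain text". File format is illustrative.
import json
import os
import tempfile

def save_adapter(path, A, B, alpha, r):
    """Write the adapter's low-rank matrices and scaling config to disk."""
    with open(path, "w") as f:
        json.dump({"A": A, "B": B, "alpha": alpha, "r": r}, f)

def load_adapter(path):
    """Read an adapter back; in a real pipeline it would then be merged
    into (or hot-swapped onto) the frozen base model."""
    with open(path) as f:
        return json.load(f)

# Session 1: a document has been "compressed" into adapter weights
# (placeholder values here; the real weights come from training).
adapter = {"A": [[0.1, 0.2]], "B": [[0.3], [0.4]], "alpha": 2, "r": 1}
path = os.path.join(tempfile.mkdtemp(), "doc_adapter.json")
save_adapter(path, **adapter)

# Session 2 (possibly days later): reload the same knowledge and attach it
# to the base model -- the attachment step itself is elided in this sketch.
restored = load_adapter(path)
```

The point of the sketch is the shape of the workflow: the document's knowledge lives in a small reusable file rather than being re-pasted into the prompt on every conversation.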

