LoRA is the poor man's training.
With a relatively modest dataset, you can train a LoRA on top of an established model (SD1.5, NAI, Anything v3, and v4.5 are all popular bases) with pretty much any kinda-goodish video card.
(There's a low-VRAM option that makes training take a bit longer, but it can apparently run on daisy-chained potatoes.)
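To see why the training is so cheap, here's a toy sketch (plain NumPy, with shapes and numbers picked just for illustration) of what a LoRA actually is: two small low-rank matrices trained on top of a frozen weight:

```python
import numpy as np

# Toy illustration of why LoRA training is cheap: instead of updating the
# full weight matrix W (d_out x d_in), you train two small low-rank
# factors B (d_out x r) and A (r x d_in) while W stays frozen.
d_out, d_in, r = 320, 768, 8           # one SD1.5 cross-attention shape, rank 8

W = np.random.randn(d_out, d_in)       # frozen base-model weight
A = np.random.randn(r, d_in) * 0.01    # trainable
B = np.zeros((d_out, r))               # trainable, initialized to zero

full_params = d_out * d_in             # 245,760 params to finetune directly
lora_params = r * (d_out + d_in)       # 8,704 params for the LoRA (~3.5%)

# At inference the effective weight is just base + low-rank update:
W_eff = W + B @ A
```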
They seem to be very sensitive to overtraining, but that's offset by applying a weight when you prompt with one, which is becoming the de facto standard now anyway. (Running a LoRA at 1.0 is generally considered a very aggressive weight; most people recommend 0.3–0.8.)
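That prompt weight is just a linear scale on the learned update. A minimal sketch of what the slider does under the hood (the function name and the alpha/rank convention here are assumptions, not any particular trainer's code):

```python
def apply_lora(W, B, A, weight=0.6, alpha=8, rank=8):
    """Blend one LoRA into a frozen weight at a given prompt weight.

    weight=1.0 applies the full learned update (often too hot for an
    overtrained LoRA); 0.3-0.8 just scales it down linearly. alpha/rank
    is the scaling convention most trainers bake in.
    """
    return W + weight * (alpha / rank) * (B @ A)
```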
LoRAs are storming the community because they:
- Are easy and fast to train
- Are small in size
- Are highly effective at pushing a model toward a very specific art style or concept
- Are easy to inject (you can reference multiple LoRAs in a single prompt and combine them without needing to merge checkpoints, usually up to 4 before things get wonky; see the sketch after this list)
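Stacking works because each LoRA is just an additive update on the same frozen weights, so nothing needs to be merged. A rough sketch (names are mine, not a real library API):

```python
def combine_loras(W, loras):
    """Stack several LoRAs on one frozen weight without merging anything.

    `loras` is a list of (B, A, weight) tuples; each update just adds on
    top of the base. Past ~4 of them the summed deltas start fighting
    each other, which is where the 'wonky' results come from.
    """
    W_eff = W.copy()
    for B, A, weight in loras:
        W_eff = W_eff + weight * (B @ A)
    return W_eff
```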
This environment is constantly shifting, so maybe in a month LoRAs will be dead... but they seem to have completely overtaken Hypernetworks in popularity, and people are even able to extract LoRAs out of full checkpoints fairly effortlessly as well.
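The extraction trick is worth a quick sketch too: diff a finetuned checkpoint against its base model, then keep only the strongest singular components of that difference. This is the rough idea behind SVD-based extract scripts; everything below is illustrative, not any specific tool's code:

```python
import numpy as np

def extract_lora(W_tuned, W_base, rank=32):
    """Pull an approximate LoRA back out of a finetuned checkpoint.

    Diff the weight against the base model, then keep only the top-`rank`
    singular components (the best rank-`rank` approximation of the diff).
    """
    delta = W_tuned - W_base
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    B = U[:, :rank] * S[:rank]     # fold singular values into B (d_out x rank)
    A = Vt[:rank, :]               # (rank x d_in)
    return B, A                    # B @ A approximates delta
```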