Does DeepSpeed support fine-tuning extra-large models with LoRA?

Yes, DeepSpeed supports efficiently fine-tuning extra-large models with LoRA. DeepSpeed provides optimizations for training and inference, including memory optimization (e.g. ZeRO partitioning and offload), distributed training, and mixed-precision training, all of which help when the base model is too large to fit on a single GPU. Since LoRA freezes the base weights and trains only small low-rank adapter matrices, combining it with DeepSpeed lets you accelerate fine-tuning while keeping memory usage manageable.
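As a rough sketch of how the two fit together: a DeepSpeed ZeRO stage-3 config shards optimizer state, gradients, and parameters across GPUs, and can be passed to Hugging Face `TrainingArguments` alongside a `peft` LoRA-wrapped model. The model name and hyperparameters below are illustrative assumptions, not values from this thread.

```python
# Sketch: pairing a DeepSpeed ZeRO-3 config with LoRA fine-tuning.
# Assumes transformers, peft, and deepspeed are installed; all names
# and hyperparameters here are illustrative.

# ZeRO stage 3 partitions optimizer state, gradients, AND parameters
# across ranks, so an extra-large frozen base model can fit in memory
# while only the small LoRA adapters receive gradient updates.
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_param": {"device": "cpu"},      # optional CPU offload
        "offload_optimizer": {"device": "cpu"},  # optional CPU offload
    },
}

# Usage sketch (not executed here; model/module names are assumptions):
# from peft import LoraConfig, get_peft_model
# from transformers import AutoModelForCausalLM, TrainingArguments, Trainer
#
# model = AutoModelForCausalLM.from_pretrained("your-base-model")
# model = get_peft_model(
#     model,
#     LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"]),
# )
# args = TrainingArguments(output_dir="out", bf16=True, deepspeed=ds_config)
# Trainer(model=model, args=args, train_dataset=...).train()
```

Passing the config dict via `TrainingArguments(deepspeed=...)` lets the Trainer initialize the DeepSpeed engine; launching with the `deepspeed` CLI and a JSON config file works the same way.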