http://arxiv.org/abs/2106.09685

LoRA: Low-Rank Adaptation of Large Language Models (arxiv.org)
An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible.

Introduction
"LoRA" is a technique proposed by Microsoft for adapting large language models, ..
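As a rough illustration of the idea in the abstract, here is a minimal sketch (assuming PyTorch) of how a LoRA-style layer keeps the pre-trained weight frozen and trains only a low-rank update ΔW = BA. The class name `LoRALinear` and the `r`/`alpha` hyperparameters are illustrative choices, not the paper's reference implementation.

```python
# Minimal LoRA-style linear layer sketch (assumes PyTorch is installed).
# The frozen base weight W is left untouched; only the low-rank factors
# A (r x in_features) and B (out_features x r) are trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        # Frozen pre-trained weight (stands in for a layer of the base model).
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False
        # Trainable low-rank factors: A starts random, B starts at zero,
        # so the initial update B @ A is zero and training starts from W.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # y = W x + (alpha / r) * (B A) x
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(768, 768, r=8)
y = layer(torch.randn(2, 768))  # only lora_A and lora_B receive gradients
```

Because only A and B are updated, the number of trainable parameters drops from d x k to r x (d + k) per adapted weight matrix, which is what makes adaptation feasible without retraining all model parameters.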
Paper
2024. 1. 22. 15:21