
News / Papers

PaLM, Flan-T5, LoRA, LLaMA, Alpaca

PaLM: Scaling Language Modeling with Pathways (1)

https://velog.io/@tobigs-nlp/PaLM-Scaling-Language-Modeling-with-Pathways-1

LLaMA, Meta AI's challenger to ChatGPT

https://devocean.sk.com/blog/techBoardDetail.do?ID=164601

Prompt data for large language models

https://ncsoft.github.io/ncresearch/e36b37cd7298f4ed2458cbea6029922c13761a63

[Review] Meta AI's Small Giant Model: LLaMA (Large Language Model Meta AI)

https://moon-walker.medium.com/%EB%A6%AC%EB%B7%B0-meta-ai%EC%9D%98-small-gaint-model-llama-large-language-model-meta-ai-334e349ed06f

[Review] A relative of Meta's LLaMA: Stanford's Alpaca

https://moon-walker.medium.com/%EB%A6%AC%EB%B7%B0-meta-llama%EC%9D%98-%EC%B9%9C%EC%B2%99-stanford-univ%EC%9D%98-alpaca-ec82d432dc25

[LLaMA-related paper review] 01 - Finetuned Language Models Are Zero-Shot Learners (Instruction Tuning)

https://velog.io/@heomollang/LLaMA-%EB%85%BC%EB%AC%B8-%EB%A6%AC%EB%B7%B0-1-LLaMA-Open-and-Efficient-Foundation-Language-Models

[ChatGPT Learning Day | LLMs beyond ChatGPT] Build your own model quickly with LoRA - 김용담
https://www.youtube.com/live/66GD0Bj5Whk?feature=shared
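
The LoRA talk above is about fine-tuning with low-rank adapters: the pretrained weight W0 is frozen, and only a low-rank update BA is trained. A minimal NumPy sketch of that idea (all dimensions here are made up for illustration, not taken from any model in the talk):

```python
import numpy as np

# Hypothetical sizes for illustration: weight W0 is d x k, adapter rank r << min(d, k).
d, k, r = 8, 8, 2

rng = np.random.default_rng(0)
W0 = rng.standard_normal((d, k))        # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection, zero-initialized

x = rng.standard_normal(k)

# LoRA forward pass: h = W0 x + B(A x); only A and B receive gradients in training.
h = W0 @ x + B @ (A @ x)
```

Because B starts at zero, the adapted model initially computes exactly the same output as the base model, and only 2 * r * k parameters per layer are trained instead of d * k.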

[LLM Seminar] Self-Instruct | AI that improves itself
https://youtu.be/3a8_5YOS6hw?feature=shared
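
The Self-Instruct pipeline covered in this seminar bootstraps an instruction pool: a model generates new instructions from seed tasks, and near-duplicates are filtered out before being added back. A rough sketch of that loop, where `generate` is a hypothetical stub standing in for an LLM call, and difflib's ratio stands in for the ROUGE-L overlap filter used in the paper:

```python
import difflib

def generate(seed_tasks):
    # Hypothetical stub: a real pipeline prompts an LLM with sampled
    # seed tasks and parses new instructions from its completion.
    return [t + " in one sentence" for t in seed_tasks]

def too_similar(candidate, pool, threshold=0.7):
    # Self-Instruct discards candidates that overlap too much with the
    # existing pool (ROUGE-L in the paper; difflib ratio as a stand-in).
    return any(
        difflib.SequenceMatcher(None, candidate, t).ratio() > threshold
        for t in pool
    )

pool = ["Explain what instruction tuning is", "Summarize this paragraph"]
for cand in generate(pool):
    if not too_similar(cand, pool):
        pool.append(cand)
```

The filtering step is what keeps the self-generated dataset diverse; without it, the pool collapses into paraphrases of the seed tasks.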

[AI / Machine Learning / Deep Learning] (Advanced) LLaMA: Open and Efficient Foundation Language Models
https://youtu.be/cbLuB-b5em0?feature=shared

[Paper Review] Llama 2: Open Foundation and Fine-Tuned Chat Models
https://youtu.be/crYmS_Q4eGw?feature=shared

[ChatGPT Learning Day] KoAlpaca, the start of democratizing Korean LLMs!
https://www.youtube.com/watch?v=vzbGNxzYW0A&ab_channel=%EC%9D%B8%EA%B3%B5%EC%A7%80%EB%8A%A5%ED%8C%A9%ED%86%A0%EB%A6%AC%28AIFactory%29

[Paper Review] Open Source LMs
https://youtu.be/TLisXrictso?feature=shared

[Paper Review] Instruction Tuning with GPT-4
https://youtu.be/erXT4MlCZjs?feature=shared
