
HanBERT / Clickbait Article Detection Model

Zichao Yang et al. Hierarchical Attention Networks for Document Classification. NAACL, 2016.

https://aclanthology.org/N16-1174
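
The heart of this paper is two stacked attention layers: word-level attention pools word vectors into a sentence vector, and sentence-level attention pools those into a document vector. A minimal PyTorch sketch of that structure, with dimensions and names of my own choosing rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class AttnPool(nn.Module):
    """Additive attention pooling: a_i = softmax(tanh(W h_i) . u)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.context = nn.Parameter(torch.randn(dim))  # learned context vector u

    def forward(self, h):                              # h: (batch, seq, dim)
        u = torch.tanh(self.proj(h))
        a = torch.softmax(u @ self.context, dim=1)     # (batch, seq)
        return (a.unsqueeze(-1) * h).sum(dim=1)        # (batch, dim)

class HAN(nn.Module):
    def __init__(self, vocab_size, emb=128, hid=64, num_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.word_gru = nn.GRU(emb, hid, bidirectional=True, batch_first=True)
        self.word_attn = AttnPool(2 * hid)
        self.sent_gru = nn.GRU(2 * hid, hid, bidirectional=True, batch_first=True)
        self.sent_attn = AttnPool(2 * hid)
        self.cls = nn.Linear(2 * hid, num_classes)

    def forward(self, docs):            # docs: (batch, n_sents, n_words) token ids
        b, s, w = docs.shape
        h, _ = self.word_gru(self.emb(docs.view(b * s, w)))
        sents = self.word_attn(h).view(b, s, -1)  # one vector per sentence
        h, _ = self.sent_gru(sents)
        return self.cls(self.sent_attn(h))        # document-level logits

logits = HAN(vocab_size=30000)(torch.randint(0, 30000, (2, 5, 20)))  # (2, 2)
```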

Yongjie Wang et al. On the Use of Bert for Automated Essay Scoring: Joint Learning of Multi-Scale Essay Representation. NAACL, 2022.

https://arxiv.org/abs/2205.03835
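
The paper's idea is to learn essay representations at several scales jointly (the whole document plus shorter segments) and score from their combination. The sketch below only illustrates that general shape: the MultiScaleScorer name, the bert-base-uncased checkpoint, mean-pooled word-count segments, and the regression head are all my assumptions, not the authors' model; their code is in the lingochamp/Multi-Scale-BERT-AES repo linked further down.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiScaleScorer(nn.Module):
    def __init__(self, name="bert-base-uncased", seg_words=128):
        super().__init__()
        self.tok = AutoTokenizer.from_pretrained(name)
        self.bert = AutoModel.from_pretrained(name)
        self.seg_words = seg_words
        self.head = nn.Linear(2 * self.bert.config.hidden_size, 1)  # essay score

    def encode(self, texts):            # one [CLS] vector per text
        batch = self.tok(texts, padding=True, truncation=True,
                         max_length=512, return_tensors="pt")
        return self.bert(**batch).last_hidden_state[:, 0]

    def forward(self, essay):
        doc = self.encode([essay])      # document-scale representation
        words = essay.split()
        segs = [" ".join(words[i:i + self.seg_words])
                for i in range(0, len(words), self.seg_words)] or [essay]
        seg = self.encode(segs).mean(dim=0, keepdim=True)  # segment scale
        return self.head(torch.cat([doc, seg], dim=-1))    # (1, 1) score
```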

한상우, 온병원. 다중 계층 BERT를 활용한 낚시성 기사 탐지 모델 (A Clickbait Article Detection Model Using Multi-Level BERT). 한국정보기술학회 (Korean Institute of Information Technology), 2023.

https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11485497

Clickbait article detection dataset

AI-Hub (www.aihub.or.kr), sample record from the dataset:

    {
        "sourceDataInfo": {
            "newsID": "GB_M11_642158",
            "newsCategory": "세계",
            "newsSubcategory": "국제경제",
            "newsTitle": "신종 코로나 악재, 중국 본토 증시 9%대 폭락",
            ...
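
Assuming every file follows the record layout shown in the sample above, a minimal loader could look like this. Only the four fields visible in the sample are grounded; the commented-out body and label fields are hypothetical placeholders to check against the AI-Hub schema documentation.

```python
import json

def load_article(path):
    """Read one AI-Hub clickbait-detection record (fields per the sample above)."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    info = data["sourceDataInfo"]
    return {
        "id": info["newsID"],
        "category": info["newsCategory"],
        "subcategory": info["newsSubcategory"],
        "title": info["newsTitle"],
        # Hypothetical fields -- verify the real key names in the schema docs:
        # "content": info.get("newsContent"),
        # "label": data.get("labeledDataInfo", {}).get("clickbaitClass"),
    }
```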

Week 37 - NLP models validated as effective at preventing clickbait articles

https://jiho-ml.com/weekly-nlp-37/

Week 28 - Can you reach the top just by using BERT well?

https://jiho-ml.com/weekly-nlp-28/

⚡ Fine-tuning KoBERT with PyTorch Lightning (NSMC)

https://velog.io/@jaylnne/Pytorch-Lightning-%EC%9C%BC%EB%A1%9C-koBERT-Fine-Tuning-%ED%95%B4%EB%B3%B4%EA%B8%B0-NSMC
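
In the spirit of the post above, here is a stripped-down Lightning skeleton for fine-tuning KoBERT on a binary task such as NSMC. It assumes the monologg/kobert checkpoint on the Hugging Face hub and the kobert-transformers tokenizer helper (both covered by repos linked below); swap in whatever Korean BERT checkpoint you actually use.

```python
import pytorch_lightning as pl
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertForSequenceClassification
from kobert_transformers import get_tokenizer  # assumes kobert-transformers is installed

class KoBertClassifier(pl.LightningModule):
    def __init__(self, lr=2e-5):
        super().__init__()
        self.model = BertForSequenceClassification.from_pretrained(
            "monologg/kobert", num_labels=2)
        self.lr = lr

    def training_step(self, batch, batch_idx):
        input_ids, attention_mask, labels = batch
        out = self.model(input_ids=input_ids,
                         attention_mask=attention_mask, labels=labels)
        self.log("train_loss", out.loss)
        return out.loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.lr)

# Two toy NSMC-style examples (positive, negative) just to show the wiring.
tokenizer = get_tokenizer()
enc = tokenizer(["이 영화 정말 최고였다", "시간이 아까운 영화"],
                padding=True, truncation=True, max_length=64, return_tensors="pt")
ds = TensorDataset(enc["input_ids"], enc["attention_mask"], torch.tensor([1, 0]))
pl.Trainer(max_epochs=1, accelerator="auto").fit(
    KoBertClassifier(), DataLoader(ds, batch_size=2))
```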

[Code Implementation] Sentence Classification - KoBERT

https://doheon.github.io/%EC%BD%94%EB%93%9C%EA%B5%AC%ED%98%84/nlp/ci-kobert-post/

SKTBrain/KoBERT

https://github.com/SKTBrain/KoBERT

HanBert: classifying Naver comments as positive or negative
https://parksrazor.tistory.com/231

monologg/KoBERT-Transformers
https://github.com/monologg/KoBERT-Transformers/tree/master
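
As I remember this repo's README, it ships small helpers that return the KoBERT weights as a regular transformers BertModel plus the matching SentencePiece tokenizer; verify the exact API against the repo before relying on it.

```python
from kobert_transformers import get_kobert_model, get_tokenizer

model = get_kobert_model()    # transformers BertModel loaded with KoBERT weights
tokenizer = get_tokenizer()   # SentencePiece-based KoBertTokenizer

inputs = tokenizer("한국어 모델을 공유합니다.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768)
```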

lingochamp/Multi-Scale-BERT-AES

https://github.com/lingochamp/Multi-Scale-BERT-AES/

Doheon/NewsClassification-KoBERT

https://github.com/Doheon/NewsClassification-KoBERT

jaylnne/nsmc-bert-pytorch_lightning

https://github.com/jaylnne/nsmc-bert-pytorch_lightning

[Korean Hate Speech Detection with PLMs] 1. Assessing the current state
https://velog.io/@seoyeon96/PLM%EC%9D%84-%EC%9D%B4%EC%9A%A9%ED%95%9C-%ED%95%9C%EA%B5%AD%EC%96%B4-%ED%98%90%EC%98%A4-%ED%91%9C%ED%98%84-%ED%83%90%EC%A7%80-1.-%ED%98%84%ED%99%A9-%ED%8C%8C%EC%95%85

[Korean Hate Speech Detection with PLMs] 2. Choosing a baseline model
https://velog.io/@seoyeon96/PLM%EC%9D%84-%EC%9D%B4%EC%9A%A9%ED%95%9C-%ED%95%9C%EA%B5%AD%EC%96%B4-%ED%98%90%EC%98%A4-%ED%91%9C%ED%98%84-%ED%83%90%EC%A7%80-2.-%EB%B2%A0%EC%9D%B4%EC%8A%A4%EB%9D%BC%EC%9D%B8-%EB%AA%A8%EB%8D%B8-%EC%84%A0%ED%83%9D%ED%95%98%EA%B8%B0

[Korean Hate Speech Detection with PLMs] 3. Changing the model architecture
https://velog.io/@seoyeon96/PLM%EC%9D%84-%EC%9D%B4%EC%9A%A9%ED%95%9C-%ED%95%9C%EA%B5%AD%EC%96%B4-%ED%98%90%EC%98%A4-%ED%91%9C%ED%98%84-%ED%83%90%EC%A7%80-3.-%EB%AA%A8%EB%8D%B8-%EA%B5%AC%EC%A1%B0-%EB%B0%94%EA%BF%94%EB%B3%B4%EA%B8%B0

[Korean Hate Speech Detection with PLMs] 4. Changing the data
https://velog.io/@seoyeon96/PLM%EC%9D%84-%EC%9D%B4%EC%9A%A9%ED%95%9C-%ED%95%9C%EA%B5%AD%EC%96%B4-%ED%98%90%EC%98%A4-%ED%91%9C%ED%98%84-%ED%83%90%EC%A7%80-4.-%EB%8D%B0%EC%9D%B4%ED%84%B0-%EB%B0%94%EA%BF%94%EB%B3%B4%EA%B8%B0

[Korean Hate Speech Detection with PLMs] 5. Resolving label imbalance (see the weighted-loss sketch after this list)
https://velog.io/@seoyeon96/PLM%EC%9D%84-%EC%9D%B4%EC%9A%A9%ED%95%9C-%ED%95%9C%EA%B5%AD%EC%96%B4-%ED%98%90%EC%98%A4-%ED%91%9C%ED%98%84-%ED%83%90%EC%A7%80-5.-%EB%A0%88%EC%9D%B4%EB%B8%94-%EB%B6%88%EA%B7%A0%ED%98%95-%ED%95%B4%EC%86%8C

[Korean Hate Speech Detection with PLMs] 6. Data augmentation
https://velog.io/@seoyeon96/PLM%EC%9D%84-%EC%9D%B4%EC%9A%A9%ED%95%9C-%ED%95%9C%EA%B5%AD%EC%96%B4-%ED%98%90%EC%98%A4-%ED%91%9C%ED%98%84-%ED%83%90%EC%A7%80-6.-%EB%8D%B0%EC%9D%B4%ED%84%B0-%EC%A6%9D%EA%B0%95

[Korean Hate Speech Detection with PLMs] 7. Using the augmented data
https://velog.io/@seoyeon96/PLM%EC%9D%84-%EC%9D%B4%EC%9A%A9%ED%95%9C-%ED%95%9C%EA%B5%AD%EC%96%B4-%ED%98%90%EC%98%A4-%ED%91%9C%ED%98%84-%ED%83%90%EC%A7%80-7.-%EC%A6%9D%EA%B0%95-%EB%8D%B0%EC%9D%B4%ED%84%B0-%ED%99%9C%EC%9A%A9%ED%95%98%EA%B8%B0

[Korean Hate Speech Detection with PLMs] 8. Using the augmented data, part 2
https://velog.io/@seoyeon96/PLM%EC%9D%84-%EC%9D%B4%EC%9A%A9%ED%95%9C-%ED%95%9C%EA%B5%AD%EC%96%B4-%ED%98%90%EC%98%A4-%ED%91%9C%ED%98%84-%ED%83%90%EC%A7%80-8.-%EC%A6%9D%EA%B0%95-%EB%8D%B0%EC%9D%B4%ED%84%B0-%ED%99%9C%EC%9A%A9%ED%95%98%EA%B8%B02
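
For part 5 of the series (label imbalance), the usual first remedy is to weight the loss inversely to class frequency so that mistakes on the minority class cost more. A self-contained sketch:

```python
import torch
import torch.nn as nn

labels = torch.tensor([0] * 90 + [1] * 10)        # a 90/10 imbalanced label set
counts = torch.bincount(labels).float()           # [90., 10.]
weights = counts.sum() / (len(counts) * counts)   # inverse-frequency: [0.56, 5.0]
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(100, 2)                      # stand-in model outputs
loss = criterion(logits, labels)                  # minority errors weigh ~9x more
```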