Aplicação da arquitetura transformer para sumarização de artigos científicos
Type of production
Dissertation (master's thesis)
Date
2023
Authors
Lima, Amanda Maciel de
Advisor
Rodrigues, Paulo Sérgio Silva
Citation
LIMA, Amanda Maciel de. Aplicação da arquitetura transformer para sumarização de artigos científicos. 2023. 96 f. Dissertação (Mestrado em Engenharia Elétrica) - Centro Universitário FEI, São Bernardo do Campo, 2023. Disponível em: https://doi.org/10.31414/EE.2023.D.131625.
Full text (DOI)
Keywords
Natural language processing (Computing), Abstractive text summarization, Scientific articles, Transformer architecture, Longformer
Abstract
The initial phase of the scientific research process is the exploration of articles to learn the state of the art of the topic under investigation. Given the growth of data in scientific articles and the steady advance of computerization, mechanisms capable of summarizing scientific articles become necessary in order to improve the research acquisition process and direct researchers to relevant content. Work on the summarization of scientific articles generally relies on sentence-relevance and machine learning methods. In recent years, attention mechanisms combined with neural networks and natural language processing have been proposed to interpret and contextualize language processing tasks, one of which is textual processing. In parallel, the Transformer architecture proposes a transduction model built on self-attention mechanisms, dispensing with convolutions and recurrences, and has been applied to several fields of Artificial Intelligence with promising results. This work proposed to employ the pre-trained Longformer model for the summarization of scientific articles from the SciSummNet dataset, through pre-processing, fine-tuning, and summary generation steps. The results indicated an improvement of 20.8% in ROUGE-2 recall and 22.69% in ROUGE-2 F-measure over the original SciSummNet work, obtained with the model variation called WithAbstract.
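The pipeline summarized in the abstract (long-document encoding with Longformer, fine-tuning on SciSummNet, summary generation, and ROUGE-2 evaluation) can be illustrated with a minimal sketch. The code below is not the author's implementation: the checkpoint allenai/led-base-16384, the generation settings, and the placeholder texts are hypothetical choices for illustration, using the Hugging Face transformers and rouge_score libraries.

import torch
from transformers import LEDTokenizer, LEDForConditionalGeneration
from rouge_score import rouge_scorer

# Hypothetical checkpoint: a Longformer Encoder-Decoder pre-trained for long documents.
tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

article = "..."            # full text of a scientific article (e.g., from SciSummNet)
reference_summary = "..."  # gold summary used for ROUGE evaluation

# Tokenize up to LED's 16,384-token window; handling long papers is the point of Longformer.
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=16384)

# LED combines sparse local attention with global attention on selected tokens;
# a common choice is global attention on the first (<s>) token.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    num_beams=4,
    max_length=256,
)
generated_summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

# ROUGE-2 recall and F-measure, the metrics reported in the abstract.
scorer = rouge_scorer.RougeScorer(["rouge2"], use_stemmer=True)
scores = scorer.score(reference_summary, generated_summary)
print(scores["rouge2"].recall, scores["rouge2"].fmeasure)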