
Paper Readings

Undergraduate Year 3
Session 1: Error Analysis of a Product Attribute-Value Extraction Task [pdf][slide]
Session 2: Error Analysis of Factuality Analysis and Subject Analysis for Detecting Disease from Web Information [pdf][slide]

Undergraduate Year 4
Session 1: A Semantic Analysis System Incorporating a Thesaurus [pdf][slide]
Session 2: Structuring Evaluation Viewpoints Considering Relative Features in Semantic Aggregation [pdf][slide]
Session 3: Building a Monolingual Parallel Corpus for Text Simplification Using Sentence Similarity Based on Alignment between Word Embeddings [pdf][slide]
Session 4: An Analysis of Crowdsourced Text Simplifications [pdf][slide]
Session 5: The Language Demographics of Amazon Mechanical Turk [pdf][slide]
Session 6: What Substitutes Tell Us - Analysis of an "All-Words" Lexical Substitution Corpus [pdf][slide]

Master's Year 1
Session 1: Reporting Score Distributions Makes a Difference: Performance Study of LSTM-networks for Sequence Tagging [pdf] [slide]
Session 2: End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF [pdf] [slide]
Session 3: Understanding the Lexical Simplification Needs of Non-Native Speakers of English [pdf] [slide]
Session 4: Dict2vec: Learning Word Embeddings using Lexical Dictionaries [pdf] [slide]
Session 5: Deep contextualized word representations [pdf] [slide]
Session 6: The Importance of Subword Embeddings in Sentence Pair Modeling [pdf] [slide]
Session 7: Segmentation-Free Word Embedding for Unsegmented Languages [pdf] [slide]
Session 8: When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation? [pdf] [slide]
Session 9: Intrinsic Evaluation of Word Vectors Fails to Predict Extrinsic Performance [pdf] [slide]
Session 10: Split and Rephrase: Better Evaluation and a Stronger Baseline [pdf] [slide]
Session 11: How Transferable are Neural Networks in NLP Applications? [pdf] [slide]
Session 12: Phrase-level Self-Attention Networks for Universal Sentence Encoding [pdf] [slide]
Session 13: Named Entity Recognition With Parallel Recurrent Neural Networks [pdf] [slide]
Session 14: Unsupervised Statistical Machine Translation [pdf] [slide]
Session 15: Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms [pdf] [slide]
Session 16: DSGAN: Generative Adversarial Training for Distant Supervision Relation Extraction [pdf] [slide]
Session 17: A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings [pdf] [slide]
Session 18: Rotational Unit of Memory: A Novel Representation Unit for RNNs with Scalable Applications [pdf] [slide]
Session 19: Better Word Embeddings by Disentangling Contextual n-Gram Information [pdf] [slide]
Session 20: Improving Word Embeddings Using Kernel PCA [pdf] [slide]
Session 21: Character Eyes: Seeing Language through Character-Level Taggers [pdf] [slide]
Session 22: Retrofitting Contextualized Word Embeddings with Paraphrases [pdf] [slide]
Session 23: Simple task-specific bilingual word embeddings [pdf] [slide]
Session 24: Simple and Effective Paraphrastic Similarity from Parallel Translations [pdf] [slide]
Session 25: What does BERT learn about the structure of language? [pdf] [slide]

