A large language model for electronic health records – npj Digital Medicine

  • Adoption of Electronic Health Record Systems among U.S. Non-Federal Acute Care Hospitals: 2008–2015. ONC Data Brief. https://www.healthit.gov/sites/default/files/briefs/2015_hospital_adoption_db_v17.pdf (2016).

  • Adler-Milstein, J. et al. Electronic health record adoption in US hospitals: the emergence of a digital ‘advanced use’ divide. J. Am. Med. Inform. Assoc. 24, 1142–1148 (2017).

  • Bush, R. A., Kuelbs, C. L., Ryu, J., Jian, W. & Chiang, G. J. Structured data entry in the electronic medical record: perspectives of pediatric specialty physicians and surgeons. J. Med. Syst. 41, 1–8 (2017).

  • Meystre, S. M., Savova, G. K., Kipper-Schuler, K. C. & Hurdle, J. F. Extracting information from textual documents in the electronic health record: a review of recent research. Yearb. Med. Inform. 17, 128–144 (2008).

  • Liang, H. et al. Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence. Nat. Med. 25, 433–438 (2019).

  • Yang, J. et al. Assessing the prognostic significance of tumor-infiltrating lymphocytes in patients with melanoma using pathologic features identified by natural language processing. JAMA Netw. Open 4, e2126337 (2021).

  • Nadkarni, P. M., Ohno-Machado, L. & Chapman, W. W. Natural language processing: an introduction. J. Am. Med. Inform. Assoc. 18, 544–551 (2011).

  • LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).

  • Collobert, R. et al. Natural language processing (almost) from scratch. J. Mach. Learn. Res. 12, 2493–2537 (2011).

  • Lample, G., Ballesteros, M., Subramanian, S., Kawakami, K. & Dyer, C. Neural architectures for named entity recognition. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 260–270 (2016).

  • Lee, J. et al. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36, 1234–1240 (2020).

  • Vaswani, A. et al. Attention is All you Need. Advances in Neural Information Processing Systems. 30 (2017).

  • Wang, A. et al. GLUE: a multi-task benchmark and analysis platform for natural language understanding. Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. 353–355 (2018).

  • Wang, A. et al. SuperGLUE: a stickier benchmark for general-purpose language understanding systems. Advances in Neural Information Processing Systems. 32 (2019).

  • Qiu, X. et al. Pre-trained models for natural language processing: a survey. Science China Technological Sciences. 63, 1872–1897 (2020).

  • Tay, Y., Dehghani, M., Bahri, D. & Metzler, D. Efficient transformers: a survey. ACM Computing Surveys. 55, 1–28 (2020).

  • Yu, J., Bohnet, B. & Poesio, M. Named entity recognition as dependency parsing. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 6470–6476 (2020).

  • Yamada, I., Asai, A., Shindo, H., Takeda, H. & Matsumoto, Y. LUKE: deep contextualized entity representations with entity-aware self-attention. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 6442–6454 (2020).

  • Li, X. et al. Dice loss for data-imbalanced NLP tasks. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 465–476 (2020).

  • Xu, B., Wang, Q., Lyu, Y., Zhu, Y. & Mao, Z. Entity structure within and throughout: modeling mention dependencies for document-level relation extraction. Proceedings of the AAAI Conference on Artificial Intelligence 35, 14149–14157 (2021).

  • Ye, D., Lin, Y. & Sun, M. Pack together: entity and relation extraction with levitated marker. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. 1, 4904–4917 (2021).

  • Cohen, A. D., Rosenman, S. & Goldberg, Y. Relation classification as two-way span-prediction. ArXiv arXiv:2010.04829 (2021).

  • Lyu, S. & Chen, H. Relation classification with entity type restriction. Findings of the Association for Computational Linguistics: ACL-IJCNLP. 390–395 (2021).

  • Wang, J. & Lu, W. Two are better than one: joint entity and relation extraction with table-sequence encoders. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 1706–1721 (2020).

  • Jiang, H. et al. SMART: robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2177–2190 (2020).

  • Yang, Z. et al. XLNet: generalized autoregressive pretraining for language understanding. Proceedings of the 33rd International Conference on Neural Information Processing Systems. 5753–5763 (2019).

  • Raffel, C. et al. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 21, 1–67 (2019).

  • Lan, Z.-Z. et al. ALBERT: a lite BERT for self-supervised learning of language representations. ArXiv arXiv:1909.11942 (2019).

  • Wang, S., Fang, H., Khabsa, M., Mao, H. & Ma, H. Entailment as Few-Shot Learner. ArXiv arXiv:2104.14690 (2021).

  • Zhang, Z. et al. Semantics-aware BERT for language understanding. Proceedings of the AAAI Conference on Artificial Intelligence. 34, 9628–9635 (2020).

  • Zhang, Z., Yang, J. & Zhao, H. Retrospective reader for machine reading comprehension. Proceedings of the AAAI Conference on Artificial Intelligence. 35, 14506–14514 (2021).

  • Garg, S., Vu, T. & Moschitti, A. TANDA: transfer and adapt pre-trained transformer models for answer sentence selection. Proceedings of the AAAI Conference on Artificial Intelligence. 34, 7780–7788 (2020).

  • Bommasani, R. et al. On the opportunities and risks of foundation models. ArXiv arXiv:2108.07258 (2021).

  • Floridi, L. & Chiriatti, M. GPT-3: its nature, scope, limits, and consequences. Minds Mach. 30, 681–694 (2020).

  • Gu, Y. et al. Domain-specific language model pretraining for biomedical natural language processing. ACM Trans. Comput. Healthc. 3, 1–23 (2022).

  • Shin, H.-C. et al. BioMegatron: larger biomedical domain language model. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 4700–4706 (2020).

  • Alsentzer, E. et al. Publicly Available Clinical BERT Embeddings. in Proc. 2nd Clinical Natural Language Processing Workshop 72–78 (2019).

  • Johnson, A. E. W. et al. MIMIC-III, a freely accessible critical care database. Sci. Data 3, 160035 (2016).

  • Uzuner, Ö., South, B. R., Shen, S. & DuVall, S. L. 2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text. J. Am. Med. Inform. Assoc. 18, 552–556 (2011).

  • Sun, W., Rumshisky, A. & Uzuner, O. Evaluating temporal relations in clinical text: 2012 i2b2 Challenge. J. Am. Med. Inform. Assoc. 20, 806–813 (2013).

  • Yang, X. et al. Identifying relations of medications with adverse drug events using recurrent convolutional neural networks and gradient boosting. J. Am. Med. Inform. Assoc. 27, 65–72 (2020).

  • Yang, X. et al. A study of deep learning methods for de-identification of clinical notes in cross-institute settings. BMC Med. Inform. Decis. Mak. 19, 232 (2019).

  • Shoeybi, M. et al. Megatron-LM: training multi-billion parameter language models using model parallelism. ArXiv arXiv:1909.08053 (2020).

  • Levine, Y., Wies, N., Sharir, O., Bata, H. & Shashua, A. Limits to depth efficiencies of self-attention. Advances in Neural Information Processing Systems 33, 22640–22651 (2020).

  • Sennrich, R., Haddow, B. & Birch, A. Neural Machine Translation of Rare Words with Subword Units. in Proc. 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 1715–1725 (Association for Computational Linguistics, 2016).

  • Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 4171–4186 (2019).

  • Wu, Y., Xu, J., Jiang, M., Zhang, Y. & Xu, H. A study of neural word embeddings for named entity recognition in clinical text. AMIA Annu. Symp. Proc. 2015, 1326–1333 (2015).

  • Soysal, E. et al. CLAMP—a toolkit for efficiently building customized clinical natural language processing pipelines. J. Am. Med. Inform. Assoc. 25, 331–336 (2018).

  • Wu, Y., Jiang, M., Lei, J. & Xu, H. Named entity recognition in Chinese clinical text using deep neural network. Stud. Health Technol. Inform. 216, 624–628 (2015).

  • Wu, Y. et al. Combine factual medical knowledge and distributed word representation to improve clinical named entity recognition. in AMIA Annual Symposium Proceedings vol. 2018, 1110 (American Medical Informatics Association, 2018).

  • Kumar, S. A survey of deep learning methods for relation extraction. ArXiv arXiv:1705.03645 (2017).

  • Lv, X., Guan, Y., Yang, J. & Wu, J. Clinical relation extraction with deep learning. Int. J. Hybrid. Inf. Technol. 9, 237–248 (2016).

  • Wei, Q. et al. Relation extraction from clinical narratives using pre-trained language models. AMIA Annu. Symp. Proc. 2019, 1236–1245 (2020).

  • Guan, H. & Devarakonda, M. Leveraging contextual information in extracting long distance relations from clinical notes. AMIA Annu. Symp. Proc. 2019, 1051–1060 (2020).

  • Alimova, I. & Tutubalina, E. Multiple features for clinical relation extraction: a machine learning approach. J. Biomed. Inform. 103, 103382 (2020).

  • Mahendran, D. & McInnes, B. T. Extracting adverse drug events from clinical notes. AMIA Summits on Translational Science Proceedings. 420–429 (2021).

  • Yang, X., Zhang, H., He, X., Bian, J. & Wu, Y. Extracting family history of patients from clinical narratives: exploring an end-to-end solution with deep learning models. JMIR Med. Inform. 8, e22982 (2020).

  • Yang, X., Yu, Z., Guo, Y., Bian, J. & Wu, Y. Clinical relation extraction using transformer-based models. ArXiv arXiv:2107.08957 (2021).

  • Cer, D., Diab, M., Agirre, E., Lopez-Gazpio, I. & Specia, L. SemEval-2017 task 1: semantic textual similarity multilingual and crosslingual focused evaluation. Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017). 1–14 (2017).

  • Farouk, M. Measuring sentences similarity: a survey. ArXiv arXiv:1910.03940 (2019).

  • Ramaprabha, J., Das, S. & Mukerjee, P. Survey on sentence similarity evaluation using deep learning. J. Phys. Conf. Ser. 1000, 012070 (2018).

  • Gomaa, W. H. & Fahmy, A. A survey of text similarity approaches. International Journal of Computer Applications 68, 13–18 (2013).

  • Wang, Y. et al. MedSTS: a resource for clinical semantic textual similarity. Lang. Resour. Eval. 54, 57–72 (2020).

  • Rastegar-Mojarad, M. et al. BioCreative/OHNLP Challenge 2018. in Proc. 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics 575–575 (ACM, 2018).

  • Wang, Y. et al. Overview of the 2019 n2c2/OHNLP track on clinical semantic textual similarity. JMIR Med. Inform. 8, e23375 (2020).

  • Mahajan, D. et al. Identification of semantically similar sentences in clinical notes: iterative intermediate training using multi-task learning. JMIR Med. Inform. 8, e22508 (2020).

  • Dagan, I., Glickman, O. & Magnini, B. in Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Textual Entailment (eds. Quiñonero-Candela, J., Dagan, I., Magnini, B. & d’Alché-Buc, F.) 177–190 (Springer Berlin Heidelberg, 2006).

  • Williams, A., Nangia, N. & Bowman, S. R. A broad-coverage challenge corpus for sentence understanding through inference. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 1, 1112–1122 (2018).

  • Bowman, S. R., Angeli, G., Potts, C. & Manning, C. D. A large annotated corpus for learning natural language inference. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. 632–642 (2015).

  • Shivade, C. MedNLI—a natural language inference dataset for the clinical domain. PhysioNet https://doi.org/10.13026/C2RS98 (2017).

  • Conneau, A., Kiela, D., Schwenk, H., Barrault, L. & Bordes, A. Supervised learning of universal sentence representations from natural language inference data. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 670–680 (2017).

  • Rajpurkar, P., Zhang, J., Lopyrev, K. & Liang, P. SQuAD: 100,000+ questions for machine comprehension of text. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. 2383–2392 (2016).

  • Rajpurkar, P., Jia, R. & Liang, P. Know what you don’t know: unanswerable questions for SQuAD. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics 2, 784–789 (2018).

  • Zhu, M., Ahuja, A., Juan, D.-C., Wei, W. & Reddy, C. K. Question Answering with Long Multiple-Span Answers. in Findings of the Association for Computational Linguistics: EMNLP 2020 3840–3849 (Association for Computational Linguistics, 2020).

  • Ben Abacha, A. & Demner-Fushman, D. A question-entailment approach to question answering. BMC Bioinforma. 20, 511 (2019).

  • Pampari, A., Raghavan, P., Liang, J. & Peng, J. emrQA: a large corpus for question answering on electronic medical records. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2357–2368 (2018).

  • Yue, X., Gutierrez, B. J. & Sun, H. Clinical reading comprehension: a thorough analysis of the emrQA dataset. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 4474–4486 (2020).
