Acronyms - Explore the Science & Experts | ideXlab

Acronyms

The experts below are selected from a list of 70,917 experts worldwide, ranked by the ideXlab platform

Genevieve B Melton – One of the best experts on this subject based on the ideXlab platform.

  • Challenges and practical approaches with word sense disambiguation of acronyms and abbreviations in the clinical domain
    Healthcare Informatics Research, 2015
    Co-Authors: Sungrim Moon, Bridget T Mcinnes, Genevieve B Melton
    Abstract:

    OBJECTIVES Although acronyms and abbreviations in clinical text are used widely on a daily basis, relatively little research has focused on word sense disambiguation (WSD) of acronyms and abbreviations in the healthcare domain. Since clinical notes have distinctive characteristics, it is unclear whether techniques that are effective for acronym and abbreviation WSD in the biomedical literature are sufficient. METHODS The authors discuss feature selection for automated techniques and challenges with WSD of acronyms and abbreviations in the clinical domain. RESULTS There are significant challenges associated with the informal nature of clinical text, such as typographical errors and incomplete sentences; with insufficient clinical resources, such as clinical sense inventories; and with privacy and security obstacles to conducting research with clinical text. Although we anticipated that sophisticated techniques, such as biomedical terminologies, semantic types, part-of-speech, and language modeling, would be needed for feature selection with automated machine learning approaches, we found instead that simple techniques, such as bag-of-words, were quite effective in many cases. Factors such as majority sense prevalence and the degree of separateness between sense meanings were also important considerations. CONCLUSIONS The first lesson is that a comprehensive understanding of the unique characteristics of clinical text is important for automatic acronym and abbreviation WSD. The second lesson is that investigators may find simple approaches an effective starting point for these tasks. Finally, as in other WSD tasks, an understanding of baseline majority sense rates and of the separateness between senses is important. Further studies and practical solutions are needed to better address these issues.
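
The observation above that simple bag-of-words features often work well for clinical acronym WSD can be made concrete with a minimal sketch. The abbreviation "pt", the hand-written context snippets, the sense labels, and the scikit-learn pipeline below are illustrative assumptions, not the authors' data or system.

```python
# Minimal sketch: bag-of-words word sense disambiguation for one ambiguous
# clinical abbreviation ("pt"), using scikit-learn. The tiny hand-written
# examples and sense labels are illustrative, not study data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Context snippets surrounding the ambiguous token, with annotated senses.
contexts = [
    "the pt was admitted with chest pain and shortness of breath",
    "pt denies fever chills or nausea on review of systems",
    "referred to pt for gait training and strengthening exercises",
    "continue outpatient pt three times weekly for range of motion",
]
senses = ["patient", "patient", "physical therapy", "physical therapy"]

# Bag-of-words features plus a simple classifier, mirroring the
# "simple techniques are often effective" finding in the abstract above.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(contexts, senses)

print(model.predict(["pt tolerated the physical therapy session well"]))
```

A real system would of course use many more annotated samples per abbreviation and proper cross-validation; this only shows the shape of the simple baseline.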

  • A sense inventory for clinical abbreviations and acronyms created using clinical notes and medical dictionary resources
    Journal of the American Medical Informatics Association, 2014
    Co-Authors: Sungrim Moon, Serguei V S Pakhomov, Nathan Liu, James Owen Ryan, Genevieve B Melton
    Abstract:

    Objective To create a sense inventory of abbreviations and acronyms from clinical texts. Methods The most frequently occurring abbreviations and acronyms from 352,267 dictated clinical notes were used to create a clinical sense inventory. Senses of each abbreviation and acronym were manually annotated from 500 random instances and lexically matched with long forms within the Unified Medical Language System (UMLS V.2011AB), Another Database of Abbreviations in Medline (ADAM), and Stedman’s Dictionary, Medical Abbreviations, Acronyms & Symbols, 4th edition (Stedman’s). Redundant long forms were merged after they were lexically normalized using Lexical Variant Generation (LVG). Results The clinical sense inventory was found to have skewed sense distributions, practice-specific senses, and incorrect uses. For the 440 abbreviations and acronyms analyzed in this study, 949 long forms were identified in clinical notes. This set was mapped to 17,359, 5,233, and 4,879 long forms in UMLS, ADAM, and Stedman’s, respectively. After merging long forms, only 2.3% matched across all medical resources. UMLS, ADAM, and Stedman’s covered 5.7%, 8.4%, and 11% of the merged clinical long forms, respectively. The sense inventory of clinical abbreviations and acronyms and the anonymized datasets generated from this study are available for public use at (‘Sense Inventories’, website). Conclusions Clinical sense inventories of abbreviations and acronyms created using clinical notes and medical dictionary resources demonstrate challenges with term coverage and resource integration. Further work is needed to standardize abbreviations and acronyms in clinical care and biomedicine and to facilitate automated processes such as text mining and information extraction.
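
As a rough illustration of the long-form merging and coverage measurement described above, the sketch below normalizes long forms with a simple lowercase/punctuation/whitespace cleanup (standing in for the LVG normalization used in the study) and checks how many clinical senses a dictionary resource covers. The abbreviation "RA" and both long-form lists are invented for illustration.

```python
# Minimal sketch of merging long forms by lexical normalization and measuring
# resource coverage. A crude string normalization stands in for LVG, and the
# tiny long-form lists are made up for illustration.
import re

def normalize(long_form: str) -> str:
    # Lowercase, drop punctuation, collapse whitespace.
    cleaned = re.sub(r"[^\w\s]", " ", long_form.lower())
    return re.sub(r"\s+", " ", cleaned).strip()

clinical = {"RA": {"rheumatoid arthritis", "right atrium", "Room air"}}
dictionary_resource = {"RA": {"Rheumatoid Arthritis", "right atrial", "room air"}}

for abbrev, senses in clinical.items():
    merged_clinical = {normalize(s) for s in senses}
    merged_resource = {normalize(s) for s in dictionary_resource.get(abbrev, set())}
    covered = merged_clinical & merged_resource
    print(abbrev, "coverage:", len(covered), "/", len(merged_clinical), covered)
```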

  • Automated disambiguation of acronyms and abbreviations in clinical texts: window and training size considerations
    American Medical Informatics Association Annual Symposium, 2012
    Co-Authors: Sungrim Moon, Serguei V S Pakhomov, Genevieve B Melton
    Abstract:

    Acronyms and abbreviations within electronic clinical texts are widespread and often associated with multiple senses. Automated word sense disambiguation (WSD), the task of assigning the context-appropriate sense to an ambiguous clinical acronym or abbreviation, represents an active problem for medical natural language processing (NLP) systems. In this paper, fifty clinical acronyms and abbreviations with 500 samples each were studied using supervised machine-learning techniques (Support Vector Machines (SVM), Naive Bayes (NB), and Decision Trees (DT)) to optimize the context window size and orientation and to determine the minimum training sample size needed for optimal performance. Our analysis of window size and orientation showed the best performance with a larger left-sided and a smaller right-sided window. To achieve an accuracy of over 90%, the minimum required training sample size was approximately 125 samples for SVM classifiers with inverted cross-validation. These findings support future work in clinical acronym and abbreviation WSD and require validation with other clinical texts.
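
The asymmetric-window finding above (a larger left-sided and a smaller right-sided context window) can be sketched as follows. The specific window sizes, the toy note snippets for the abbreviation "ms", and the SVM setup are assumptions for illustration, not the study's actual configuration.

```python
# Minimal sketch of asymmetric context-window feature extraction feeding an SVM.
# Window sizes and the toy snippets are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

def window_context(tokens, target_index, left=8, right=2):
    """Tokens in an asymmetric window: larger on the left, smaller on the right."""
    start = max(0, target_index - left)
    end = min(len(tokens), target_index + 1 + right)
    return " ".join(tokens[start:target_index] + tokens[target_index + 1:end])

notes = [
    ("history of copd on home oxygen presents with increased sob and wheezing ms unchanged",
     "mental status"),
    ("known multiple sclerosis followed by neurology ms flare treated with steroids last year",
     "multiple sclerosis"),
]

X, y = [], []
for text, sense in notes:
    tokens = text.split()
    X.append(window_context(tokens, tokens.index("ms")))
    y.append(sense)

clf = make_pipeline(CountVectorizer(), LinearSVC())
clf.fit(X, y)
print(clf.predict([window_context("admitted after fall ms intact alert and oriented".split(), 3)]))
```

In practice the window sizes would be tuned per acronym and per classifier, which is exactly the kind of analysis the abstract reports.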

Sungrim Moon – One of the best experts on this subject based on the ideXlab platform.

  • Challenges and practical approaches with word sense disambiguation of acronyms and abbreviations in the clinical domain
    Healthcare Informatics Research, 2015
    Co-Authors: Sungrim Moon, Bridget T Mcinnes, Genevieve B Melton
    Abstract: see the same entry under Genevieve B Melton above.

  • A sense inventory for clinical abbreviations and acronyms created using clinical notes and medical dictionary resources
    Journal of the American Medical Informatics Association, 2014
    Co-Authors: Sungrim Moon, Serguei V S Pakhomov, Nathan Liu, James Owen Ryan, Genevieve B Melton
    Abstract: see the same entry under Genevieve B Melton above.

  • Automated disambiguation of acronyms and abbreviations in clinical texts: window and training size considerations
    American Medical Informatics Association Annual Symposium, 2012
    Co-Authors: Sungrim Moon, Serguei V S Pakhomov, Genevieve B Melton
    Abstract: see the same entry under Genevieve B Melton above.

Serguei V S Pakhomov – One of the best experts on this subject based on the ideXlab platform.

  • A sense inventory for clinical abbreviations and acronyms created using clinical notes and medical dictionary resources
    Journal of the American Medical Informatics Association, 2014
    Co-Authors: Sungrim Moon, Serguei V S Pakhomov, Nathan Liu, James Owen Ryan, Genevieve B Melton
    Abstract: see the same entry under Genevieve B Melton above.

  • Automated disambiguation of acronyms and abbreviations in clinical texts: window and training size considerations
    American Medical Informatics Association Annual Symposium, 2012
    Co-Authors: Sungrim Moon, Serguei V S Pakhomov, Genevieve B Melton
    Abstract: see the same entry under Genevieve B Melton above.

  • Abbreviation and acronym disambiguation in clinical discourse
    American Medical Informatics Association Annual Symposium, 2005
    Co-Authors: Serguei V S Pakhomov, Ted Pedersen, Christopher G. Chute
    Abstract:

    The use of abbreviations and acronyms is pervasive in clinical reports despite many efforts to limit the use of ambiguous and unsanctioned abbreviations and acronyms. Because many abbreviations and acronyms are ambiguous with respect to their sense, complete and accurate text analysis is impossible without identifying the sense that was intended for a given abbreviation or acronym. We present the results of an experiment in which we used contexts harvested from the Internet through the Google API to collect contextual data for a set of 8 acronyms found in clinical notes at the Mayo Clinic. We then used these contexts to disambiguate the sense of abbreviations in a manually annotated corpus.
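
A minimal sketch of the general idea, training on contexts harvested from the web and then applying the model to clinical text, is given below. The harvesting step itself (done through the Google API in the paper) is omitted, and the snippets for the acronym "CVA" are invented placeholders standing in for downloaded search-result contexts.

```python
# Minimal sketch: disambiguate a clinical acronym using a model trained on
# web-harvested contexts. The snippets below are invented placeholders for
# search-result text; no web harvesting is performed here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

web_contexts = {
    "cerebrovascular accident": [
        "a cerebrovascular accident cva or stroke happens when blood flow to the brain stops",
        "rehabilitation after a cva often includes speech and physical therapy",
    ],
    "costovertebral angle": [
        "costovertebral angle cva tenderness is assessed by percussion over the flank",
        "cva tenderness can suggest pyelonephritis or kidney stones",
    ],
}

texts, labels = [], []
for sense, snippets in web_contexts.items():
    texts.extend(snippets)
    labels.extend([sense] * len(snippets))

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

# Apply the web-trained model to a clinical-note context.
print(model.predict(["no cva tenderness on exam abdomen soft nontender"]))
```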

Tsung O. Cheng – One of the best experts on this subject based on the ideXlab platform.

  • The use of coercive trial acronyms should be discouraged.
    International journal of cardiology, 2012
    Co-Authors: Tsung O. Cheng
    Abstract:

    Physicians, especially cardiologists, like to use or invent acronyms [1–8]. All the current medical journals, especially cardiological journals, continue to be filled with acronyms, and new acronyms are being invented every day, especially for cardiological trials. The use of acronyms is sometimes necessary to simplify and facilitate modern communication in our highly technical world, especially to avoid repetition of long, unwieldy, breath-catching and space-occupying trial names in a scientific publication [8]. A trial acronym is particularly advantageous to the participating investigators of the trial, who, by merely mentioning the acronym, can be instantly referred to the appropriate staff to answer any questions when they call a trial center to register a potential patient [9]. Cardiologists are most imaginative in creating acronyms for clinical trials, some of which may be intended to be prophetic, as in HOPE and CONSENSUS [8]. Such acronyms not only tend to attract more patients for the trial but also make the trial more likely to be funded by many a granting agency, especially the pharmaceutical industry [10]. Unfortunately, such trials were not more likely to report positive results [10]. There are many studies with positive-sounding acronyms that turn out to yield negative results [11–13]. Examples are ALIVE, ATLAS, BEAUTIFUL, CARDINAL, CHAMPION, CRESCENDO, DEFINITE, HALT-MI, HF-ACTION, IMPROVED, I-PRESERVE, LIMIT AMI, MASTER, MIRACLE, OPTIMIST, PROMISE, PROVE IT, SAVED, SUCCESS, and SWORD (for definitions of these trial acronyms, please consult Refs. 8,11–14). The latest entries of positive-sounding trial acronyms that turned out to be negative studies are ONTARGET [15], ACCORD [16] and AIM-HIGH [17]. Then there are nationalistic acronyms of clinical trials that tend or intend to call attention to the countries or cities of origin of the trials.

  • CALORIE is a better acronym than CALERIE.
    International journal of cardiology, 2006
    Co-Authors: Tsung O. Cheng
    Abstract:

    The investigators in the recently published article on the effects of calorie restriction in overweight individuals coined the acronym CALERIE for the Comprehensive Assessment of the Long Term Effects of Reducing Intake of Energy [1]. But I think that it would be far better to give the study the acronym CALORIE, for Comprehensive Assessment of the Long term effects Of Reducing Intake of Energy. After all, the study is all about calories. Acronyms of clinical trials are often very imaginative, intuitive and sometimes even coercive or prophetic [2]. In the latter case, the investigators attempted to draw the readers' special attention to the purpose and/or results of the trials, e.g., ACTION (Anticoagulation Consortium To Improve Outcomes Nationally), HOPE (Heart Outcomes Prevention Evaluation), RAPID (Rapid Anticoagulation Preventing Ischemic Damage) and WISH (Women Into Staying Healthy). The readers are advised to refer to Ref. [2] for a complete listing of the acronyms of cardiological trials and their definitions. On the other hand, there are many positive-sounding acronyms of trials in cardiology that turned out to yield negative results, such as ALIVE, ATLAS, CARDINAL, DEFINITE, HALT-MI, IMPROVED, LIMIT AMI, MIRA-

  • Clinical trial registration should also include trial acronyms
    International Journal of Cardiology, 2006
    Co-Authors: Tsung O. Cheng
    Abstract:

    Recently, the WHO [1] proposed, and the International Committee of Medical Journal Editors [2] stated, that all clinical trials should be registered. Unfortunately, neither listed registration of the acronyms of the trials as a requirement. Acronyms of clinical trials are flooding the medical literature [3]. They are used not only by investigators and physicians but also by patients and the press. Many acronyms are shared by multiple, unrelated trials, e.g., HEART by 17 different trials; IMPACT by 13 different trials; STOP by 11 different trials; SMART and START each by 10 different trials; BEST, CARE and PACT each by 9 different trials; and PRIME by 8 different trials [3,4]. The problem of multiple, often unrelated, trials sharing the same acronym is serious and will get worse. Therefore, maintaining a trial

Christopher G. Chute – One of the best experts on this subject based on the ideXlab platform.

  • Abbreviation and acronym disambiguation in clinical discourse
    American Medical Informatics Association Annual Symposium, 2005
    Co-Authors: Serguei V S Pakhomov, Ted Pedersen, Christopher G. Chute
    Abstract: see the same entry under Serguei V S Pakhomov above.