Acronyms

The experts below are selected from a list of 70,917 experts worldwide, ranked by the ideXlab platform.

Genevieve B Melton - One of the best experts on this subject based on the ideXlab platform.

  • Challenges and practical approaches with word sense disambiguation of acronyms and abbreviations in the clinical domain
    Healthcare Informatics Research, 2015
    Co-Authors: Sungrim Moon, Bridget T McInnes, Genevieve B Melton
    Abstract:

    OBJECTIVES Although acronyms and abbreviations in clinical text are used widely on a daily basis, relatively little research has focused on word sense disambiguation (WSD) of acronyms and abbreviations in the healthcare domain. Since clinical notes have distinctive characteristics, it is unclear whether techniques effective for acronym and abbreviation WSD from biomedical literature are sufficient. METHODS The authors discuss feature selection for automated techniques and challenges with WSD of acronyms and abbreviations in the clinical domain. RESULTS There are significant challenges associated with the informal nature of clinical text, such as typographical errors and incomplete sentences; difficulty with insufficient clinical resources, such as clinical sense inventories; and obstacles with privacy and security for conducting research with clinical text. Although we anticipated that sophisticated techniques, such as biomedical terminologies, semantic types, part-of-speech, and language modeling, would be needed for feature selection with automated machine learning approaches, we found instead that simple techniques, such as bag-of-words, were quite effective in many cases. Factors such as majority sense prevalence and the degree of separateness between sense meanings were also important considerations. CONCLUSIONS The first lesson is that a comprehensive understanding of the unique characteristics of clinical text is important for automatic acronym and abbreviation WSD. The second lesson is that investigators may find simple approaches an effective starting point for these tasks. Finally, as in other WSD tasks, an understanding of baseline majority sense rates and separateness between senses is important. Further studies and practical solutions are needed to better address these issues.
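
    The finding that simple bag-of-words features work well lends itself to a very small illustration. The sketch below is not the authors' code; the abbreviation, example contexts, sense labels, and the scikit-learn Naive Bayes pipeline are all illustrative assumptions. It only shows the general shape of such a supervised WSD classifier: one model per ambiguous abbreviation, trained on annotated surrounding text.

```python
# Minimal sketch: bag-of-words word sense disambiguation for one ambiguous
# abbreviation ("MS"), as a stand-in for the simple features the paper found
# effective. The contexts and senses below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical annotated contexts: text surrounding each occurrence of "MS",
# paired with the manually assigned sense.
contexts = [
    "patient with relapsing remitting ms on interferon therapy",
    "ms diagnosed after mri showed demyelinating lesions",
    "severe ms with valve area of 1.0 cm2 on echocardiogram",
    "rheumatic ms requiring balloon valvuloplasty",
]
senses = [
    "multiple sclerosis",
    "multiple sclerosis",
    "mitral stenosis",
    "mitral stenosis",
]

# Bag-of-words features feeding a Naive Bayes classifier, one model per abbreviation.
model = make_pipeline(CountVectorizer(ngram_range=(1, 1)), MultinomialNB())
model.fit(contexts, senses)

print(model.predict(["echo shows ms with left atrial enlargement"]))
```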

  • A sense inventory for clinical abbreviations and acronyms created using clinical notes and medical dictionary resources
    Journal of the American Medical Informatics Association, 2014
    Co-Authors: Sungrim Moon, Serguei V S Pakhomov, Nathan Liu, James Owen Ryan, Genevieve B Melton
    Abstract:

    Objective To create a sense inventory of abbreviations and acronyms from clinical texts. Methods The most frequently occurring abbreviations and acronyms from 352,267 dictated clinical notes were used to create a clinical sense inventory. Senses of each abbreviation and acronym were manually annotated from 500 random instances and lexically matched with long forms within the Unified Medical Language System (UMLS v.2011AB), Another Database of Abbreviations in Medline (ADAM), and Stedman's Dictionary, Medical Abbreviations, Acronyms & Symbols, 4th edition (Stedman's). Redundant long forms were merged after they were lexically normalized using Lexical Variant Generation (LVG). Results The clinical sense inventory was found to have skewed sense distributions, practice-specific senses, and incorrect uses. For the 440 abbreviations and acronyms analyzed in this study, 949 long forms were identified in clinical notes. This set was mapped to 17,359, 5,233, and 4,879 long forms in UMLS, ADAM, and Stedman's, respectively. After merging long forms, only 2.3% matched across all medical resources. UMLS, ADAM, and Stedman's covered 5.7%, 8.4%, and 11% of the merged clinical long forms, respectively. The sense inventory of clinical abbreviations and acronyms and the anonymized datasets generated from this study are available for public use at (‘Sense Inventories’, website). Conclusions Clinical sense inventories of abbreviations and acronyms created using clinical notes and medical dictionary resources demonstrate challenges with term coverage and resource integration. Further work is needed to help standardize abbreviations and acronyms in clinical care and biomedicine and to facilitate automated processes such as text mining and information extraction.
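
    The merging step described above can be pictured with a toy example. The sketch below is not LVG, which performs much richer lexical normalization; it uses only lowercasing, punctuation stripping, and whitespace collapsing as a simplified stand-in, and the abbreviation, long forms, and source labels are invented.

```python
# Simplified stand-in for the long-form merging step: LVG performs richer lexical
# normalization, but lowercasing, punctuation stripping, and whitespace collapsing
# illustrate how near-duplicate long forms from different resources can be merged.
import re
from collections import defaultdict

def normalize(long_form: str) -> str:
    s = long_form.lower()
    s = re.sub(r"[^a-z0-9 ]+", " ", s)      # drop punctuation and hyphens
    return re.sub(r"\s+", " ", s).strip()   # collapse whitespace

# Hypothetical long forms for the abbreviation "pt" drawn from different resources.
long_forms = [
    ("physical therapy", "clinical notes"),
    ("Physical Therapy", "UMLS"),
    ("physical-therapy", "ADAM"),
    ("prothrombin time", "Stedman's"),
    ("Prothrombin time", "UMLS"),
]

merged = defaultdict(set)
for form, source in long_forms:
    merged[normalize(form)].add(source)

for form, sources in merged.items():
    print(form, "->", sorted(sources))
```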

  • Automated disambiguation of acronyms and abbreviations in clinical texts: window and training size considerations
    American Medical Informatics Association Annual Symposium, 2012
    Co-Authors: Sungrim Moon, Serguei V S Pakhomov, Genevieve B Melton
    Abstract:

    Acronyms and abbreviations within electronic clinical texts are widespread and often associated with multiple senses. Automated word sense disambiguation (WSD), the task of assigning the context-appropriate sense to ambiguous clinical acronyms and abbreviations, represents an active problem for medical natural language processing (NLP) systems. In this paper, fifty clinical acronyms and abbreviations with 500 samples each were studied using supervised machine-learning techniques (Support Vector Machines (SVM), Naive Bayes (NB), and Decision Trees (DT)) to optimize the window size and orientation and to determine the minimum training sample size needed for optimal performance. Our analysis of window size and orientation showed the best performance using a larger left-sided and smaller right-sided window. To achieve an accuracy of over 90%, the minimum required training sample size was approximately 125 samples for SVM classifiers with inverted cross-validation. These findings support future work in clinical acronym and abbreviation WSD and require validation with other clinical texts.
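
    The asymmetric window finding is easy to express as a feature-extraction step. In the sketch below, the window sizes and the left/right feature prefixes are illustrative assumptions rather than the paper's tuned values.

```python
# Sketch of the asymmetric context window: take more tokens to the left of the
# target abbreviation than to the right before building features. The specific
# window sizes below are illustrative, not the paper's tuned values.
def window_features(tokens, target_index, left=8, right=2):
    left_ctx = tokens[max(0, target_index - left):target_index]
    right_ctx = tokens[target_index + 1:target_index + 1 + right]
    # Mark the side of occurrence so left and right contexts stay distinguishable.
    return [f"L:{t}" for t in left_ctx] + [f"R:{t}" for t in right_ctx]

tokens = "patient admitted with chest pain and elevated bp on arrival to ed".split()
print(window_features(tokens, tokens.index("bp")))
```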

Sungrim Moon - One of the best experts on this subject based on the ideXlab platform.

  • Challenges and practical approaches with word sense disambiguation of acronyms and abbreviations in the clinical domain
    Healthcare Informatics Research, 2015
    Co-Authors: Sungrim Moon, Bridget T McInnes, Genevieve B Melton

  • A sense inventory for clinical abbreviations and acronyms created using clinical notes and medical dictionary resources
    Journal of the American Medical Informatics Association, 2014
    Co-Authors: Sungrim Moon, Serguei V S Pakhomov, Nathan Liu, James Owen Ryan, Genevieve B Melton

  • Automated disambiguation of acronyms and abbreviations in clinical texts: window and training size considerations
    American Medical Informatics Association Annual Symposium, 2012
    Co-Authors: Sungrim Moon, Serguei V S Pakhomov, Genevieve B Melton

Serguei V S Pakhomov - One of the best experts on this subject based on the ideXlab platform.

  • A sense inventory for clinical abbreviations and acronyms created using clinical notes and medical dictionary resources
    Journal of the American Medical Informatics Association, 2014
    Co-Authors: Sungrim Moon, Serguei V S Pakhomov, Nathan Liu, James Owen Ryan, Genevieve B Melton

  • Automated disambiguation of acronyms and abbreviations in clinical texts: window and training size considerations
    American Medical Informatics Association Annual Symposium, 2012
    Co-Authors: Sungrim Moon, Serguei V S Pakhomov, Genevieve B Melton

  • Abbreviation and acronym disambiguation in clinical discourse
    American Medical Informatics Association Annual Symposium, 2005
    Co-Authors: Serguei V S Pakhomov, Ted Pedersen, Christopher G. Chute
    Abstract:

    Use of abbreviations and acronyms is pervasive in clinical reports despite many efforts to limit the use of ambiguous and unsanctioned abbreviations and acronyms. Because many abbreviations and acronyms are ambiguous with respect to their sense, complete and accurate text analysis is impossible without identifying the sense intended for a given abbreviation or acronym. We present the results of an experiment in which we used contexts harvested from the Internet through the Google API to collect contextual data for a set of 8 acronyms found in clinical notes at the Mayo Clinic. We then used these contexts to disambiguate the sense of abbreviations in a manually annotated corpus.
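
    A rough picture of how harvested contexts could drive the disambiguation step is sketched below. It deliberately omits the harvesting itself (no Google API calls are shown) and assumes the snippets are already in hand; the acronym, candidate senses, and snippets are invented, and TF-IDF with cosine similarity is used as a plausible stand-in rather than the paper's actual method.

```python
# Sketch of the second half of the approach: assume context snippets for each
# candidate sense have already been harvested from the web (the harvesting step
# via the Google API is omitted here). A new clinical occurrence is assigned the
# sense whose harvested contexts it most resembles. All snippets are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

harvested = {
    "ribonucleic acid": ["rna is a nucleic acid involved in protein synthesis",
                         "messenger rna carries genetic information"],
    "radionuclide angiography": ["rna imaging assesses ventricular ejection fraction",
                                 "resting rna scan of the heart was performed"],
}

vectorizer = TfidfVectorizer()
sense_labels = list(harvested)
sense_docs = [" ".join(snippets) for snippets in harvested.values()]
sense_matrix = vectorizer.fit_transform(sense_docs)

clinical_context = "rna study showed an ejection fraction of 55 percent"
scores = cosine_similarity(vectorizer.transform([clinical_context]), sense_matrix)[0]
print(sense_labels[scores.argmax()])
```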

  • Semi-supervised maximum entropy based approach to acronym and abbreviation normalization in medical texts
    Meeting of the Association for Computational Linguistics, 2002
    Co-Authors: Serguei V S Pakhomov
    Abstract:

    Text normalization is an important aspect of successful information retrieval from medical documents such as clinical notes, radiology reports, and discharge summaries. In the medical domain, a significant part of the general problem of text normalization is abbreviation and acronym disambiguation. Numerous abbreviations are used routinely throughout such texts, and knowing their meaning is critical to data retrieval from the document. In this paper I demonstrate a method of automatically generating training data for Maximum Entropy (ME) modeling of abbreviations and acronyms and show that ME modeling is a promising technique for abbreviation and acronym normalization. I report on the results of an experiment involving training a number of ME models used to normalize abbreviations and acronyms on a sample of 10,000 rheumatology notes with ~89% accuracy.
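
    The core idea of generating training data automatically can be illustrated as follows: sentences containing a known long form are labeled with that long form, and the long form is replaced by the abbreviation, yielding sense-annotated contexts without manual annotation. The sketch below is an illustrative reconstruction rather than the paper's implementation; the abbreviation, long forms, and sentences are invented, and logistic regression stands in for the maximum entropy model.

```python
# Sketch of automatically generating sense-labeled training data: find sentences
# that spell out a known long form, substitute the abbreviation, and use the long
# form as the label. Logistic regression serves here as the maximum entropy model.
# The abbreviation, long forms, and sentences are invented for illustration.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

senses = {"ra": ["rheumatoid arthritis", "right atrium"]}

corpus = [
    "patient has long standing rheumatoid arthritis treated with methotrexate",
    "seropositive rheumatoid arthritis with joint erosions on xray",
    "thrombus seen in the right atrium on echocardiogram",
    "catheter advanced into the right atrium without difficulty",
]

# Auto-generate labeled examples: substitute the long form with the abbreviation.
X, y = [], []
for sentence in corpus:
    for abbrev, long_forms in senses.items():
        for long_form in long_forms:
            if long_form in sentence:
                X.append(re.sub(long_form, abbrev, sentence))
                y.append(long_form)

maxent = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
maxent.fit(X, y)
print(maxent.predict(["dilated ra noted on transthoracic echo"]))
```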

Tsung O. Cheng - One of the best experts on this subject based on the ideXlab platform.

  • The use of coercive trial acronyms should be discouraged.
    International Journal of Cardiology, 2012
    Co-Authors: Tsung O. Cheng
    Abstract:

    Physicians, especially cardiologists, like to use or invent acronyms [1–8]. All the current medical journals, especially cardiological journals, continue to be filled with acronyms. New acronyms are being invented every day, especially for cardiological trials. The use of acronyms is sometimes necessary to simplify and facilitate modern communication in our highly technical world, especially to avoid repetition of long, unwieldy, breath-catching and space-occupying trial names in a scientific publication [8]. A trial acronym is particularly advantageous to the participating investigators of the trial who, by merely mentioning the acronym, can be instantly referred to the appropriate staff to answer any questions when they call a trial center to register a potential patient [9]. Cardiologists are most imaginative in creating acronyms for clinical trials, some of which may be intended to be prophetic, as in HOPE and CONSENSUS [8]. Such acronyms not only tend to attract more patients for the trial but also make the trial more likely to be funded by many a granting agency, especially the pharmaceutical industry [10]. Unfortunately, such trials were not more likely to report positive results [10]. There are many studies with positive-sounding acronyms that turn out to yield negative results [11–13]. Examples are ALIVE, ATLAS, BEAUTIFUL, CARDINAL, CHAMPION, CRESCENDO, DEFINITE, HALT-MI, HF-ACTION, IMPROVED, I-PRESERVE, LIMIT AMI, MASTER, MIRACLE, OPTIMIST, PROMISE, PROVE IT, SAVED, SUCCESS, and SWORD (for definitions of these trial acronyms, please consult Refs. 8,11–14). The latest entries of positive-sounding trial acronyms that turned out to be negative studies are ONTARGET [15], ACCORD [16], and AIM-HIGH [17]. Then there are nationalistic acronyms of clinical trials that tend or intend to call attention to the countries or cities of origin of the trials.

  • CALORIE is a better acronym than CALERIE.
    International Journal of Cardiology, 2006
    Co-Authors: Tsung O. Cheng
    Abstract:

    The investigators in the recently published article on the effects of calorie restriction in overweight individuals coined the acronym CALERIE for the Comprehensive Assessment of the Long Term Effects of Reducing Intake of Energy [1]. But I think that it would be far better to give the study the acronym CALORIE for Comprehensive Assessment of the Long term effects Of Reducing Intake of Energy. After all, the study is all about calories. Acronyms of clinical trials are often very imaginative, intuitive and sometimes even coercive or prophetic [2]. In the latter case, the investigators attempted to draw special attention of the readers to the purpose and/or results of the trials, e.g., ACTION (Anticoagulation Consortium To Improve Outcomes Nationally), HOPE (Heart Outcomes Prevention Evaluation), RAPID (Rapid Anticoagulation Preventing Ischemic Damage) and WISH (Women Into Staying Healthy). The readers are advised to refer to Ref. [2] for a complete listing of the acronyms of cardiological trials and their definitions. On the other hand, there are many positive-sounding acronyms of trials in cardiology that turned out to yield negative results, such as ALIVE, ATLAS, CARDINAL, DEFINITE, HALT-MI, IMPROVED, LIMIT AMI, MIRA-

  • Clinical trial registration should also include trial acronyms
    International Journal of Cardiology, 2006
    Co-Authors: Tsung O. Cheng
    Abstract:

    Recently, WHO [1] proposed and the International Committee of Medical Journal Editors [2] stated that all clinical trials should be registered. Unfortunately, neither listed registration of the acronyms of the trials as a requirement. Acronyms of clinical trials are flooding the medical literature [3]. They are used not only by investigators and physicians but also by patients and the press. Many acronyms are shared by multiple, unrelated trials, e.g., HEART, by 17 different trials; IMPACT, by 13 different trials; STOP, by 11 different trials; SMART and START, each by 10 different trials; BEST, CARE and PACT, each by 9 different trials; and PRIME, by 8 different trials [3,4]. The problem of multiple trials, often unrelated, sharing the same acronym is serious and will get worse. Therefore, maintaining a trial

  • Acronymesis: the exploding misuse of acronyms.
    Texas Heart Institute Journal, 2003
    Co-Authors: Herbert L. Fred, Tsung O. Cheng
    Abstract:

    Acronyms are a product of the 20th century and have become an everyday part of the English language. Well-known examples are LASER (Light Amplification by Stimulated Emission of Radiation), RADAR (RAdio Detecting And Ranging), AWOL (Absent WithOut Leave), and SCUBA (Self-Contained Underwater Breathing Apparatus). Anyone who reads current scientific journals also knows that acronyms are flooding the medical literature, especially the literature on clinical research trials in cardiology. Acronyms of major cardiologic trials have increased exponentially during the past decade—from around 250 in 1992 [1] to nearly 4,200* in 2002 [2]. The coined words “acronymania” [3] and “acronymophilia” [3–5] underscore this rapid growth but fail to emphasize the downside. In fact, improper use of acronyms has become a nemesis. Hence, our term “acronymesis.” We are not saying that all acronyms are “evil.” On the contrary. Acronyms can simplify and facilitate communication, enhance recall, and save time, space, and effort for everyone involved. This is particularly true for the many clinical research trials with long, unwieldy names that are cumbersome to recite and difficult to remember. Could anyone deny that BIG-MAC is a lot more “palatable” than Beaumont Interventional Group—Mevacor, ACE inhibitor, Colchicine restenosis trial? But what if BIG-MAC were not defined?

    Failure to define acronyms is all too frequent and reflects inconsiderate writing, careless editing, and irresponsible publishing [2]. As an example, the following sentence appeared in the abstract supplement of a major cardiology journal: “The study population comprised 2,950 patients (3,549 lesions) prospectively enroled [sic] into 4 restenosis trials (MERCATOR, MARCATOR, CARPORT, PARK)” [6]. Nowhere were these acronyms defined, leaving readers to wonder whether they were looking at a used-car advertisement. Moreover, 70 other abstracts in that same issue contained undefined acronyms. And in one recent review article alone, we counted over 90 undefined acronyms, including some that appeared more than once [7]! What is obvious to members of one specialty may be obscure to members of another specialty and to those still in training. To complicate the matter, multiple trials—often unrelated—may share the same acronym. HEART, for example, currently represents no fewer than 16 different studies in which the acronym stands for different words in different combinations (Table I). In situations like this, defining the acronym when first mentioned is clearly essential.

    [Table I: Sixteen Different Clinical Trials with the Acronym HEART]

    Worse, when the first phase of a study evolves into the second phase, the acronym may remain the same, but the words represented by the acronym change. In CONSENSUS, which represents COoperative North Scandinavian ENalapril SUrvival Study, the first N stands for North, but in CONSENSUS II, it stands for New. Similarly, in VALID (Velocity Assessment for Lesions of IntermeDiate severity), the letters I and D stand for IntermeDiate, but in VALID II, they stand for InDeterminate. And in PLAC, which refers to Pravastatin Limitation of Atherosclerosis in the Coronary arteries, the C stands for Coronary, but in PLAC-2, the C stands for Carotid and the L for Lipids [2,8]. An acronym may also assume an entirely different meaning when it becomes part of another acronym. For example, STOP refers to Shunt Thrombotic Occlusion prevention by Picotamide. But STOP-IT refers to Sites Testing Osteoporosis Prevention - Intervention Treatment [2].

    Rarely questioned is the potentially coercive nature of certain acronyms [9]. CURE, HELP, HOPE, MIRACLE, and SAVE may entice research subjects by subliminally or outwardly promising something that the trial might not ultimately deliver. As a consequence, both the subjects and the investigators may become favorably biased. And because the results of clinical trials are preeminent in determining therapeutic priorities [10], practitioners might prescribe medications on the simple basis of the acronym's connotation. Trials with a positive-sounding acronym but with negative results include ATLAS, LIMIT AMI, IMPROVED, and PROMISE. The PROMISE trial, in fact, was not promising at all and had to be terminated prematurely [11]. Therefore, institutional review boards, sponsors of trials, and the researchers conducting the trials should discourage or prohibit the use of coercive acronyms [12].

    Certain acronyms may taint MEDLINE searches [13,14]. Trial names such as PACT, SMART, and START are hard to find in databases, because the words themselves are so common. Consequently, unless you know what the acronym stands for, you're likely to retrieve a ton of irrelevant information. Even worse are the trials with names known as “stop” words—words so common that MEDLINE considers them to be useless and not searchable. Examples are ITS, THIS, THAT, and WHAT. If the author does not define the acronym in the title, abstract, or text of the article, a text word search will be impossible.

    Disturbing, too, is the practice of building acronyms not from the first or first 2 letters of the words referred to, but from the third, fourth, or even last letters [2,15]. Witness RENAISSANCE (Randomized Etanercept North AmerIcan Strategy to Study AntagoNism of CytokinEs), RENEWAL (Randomized EtaNErcept Worldwide evALuation), and ACCESS (A Comparison of perCutaneous Entry SiteS for coronary angioplasty). But it gets even worse. We have acronyms made of acronyms [4]. AIMS, for example, refers to APSAC International Mortality Study; APSAC, in turn, means Anisoylated Plasminogen Streptokinase Activator Complex. And TAPS refers to TPA APSAC Patency Study; TPA, in turn, means Tissue Plasminogen Activator. We also have acronyms that contain an extra consonant or vowel that does not appear in the title of the trial but is inserted to make the acronym sound better [8], e.g., E in BASE (Berlin Aging Study) and U in PUTS (Perindopril Therapeutic Safety study). Likewise, words are often excluded from acronyms to make pronunciation easier. Examples are GUSTO for Global Utilization (of) Streptokinase (and) TPA (for) Occluded (arteries), and EPIC for Evaluation (of IIb/IIIa) Platelet (receptor antagonist 7E3 in preventing) Ischemic Complications. Some acronyms are easily confused with each other because they sound alike [8]. Yet they may represent completely different trials, e.g., TOMHS (Treatment Of Mild Hypertension Study) and TOHMS (Trials Of Hypertensive Medications Study). And when all letters of an unexplained acronym are not capitalized in the title, especially when the acronym itself is a common word, the uninitiated reader will not necessarily know which is the acronym and which is the common word, e.g., ACUTE vs Acute, COURAGE vs Courage, EPIC vs Epic, EPILOG vs Epilog, LIFE vs Life, and MIRACLE vs Miracle [2].

    Finally, inventing acronyms for cardiologic research trials has become a game of “one-upmanship.” The goal seems to be finding an acronym that is cuter or wittier than the previous one. Categorizing them has become popular as well [16]. Thus, we have anatomic terms such as ARMS, BRAINS, CAVA, EARS, FACET, HEART, INTIMA, IRIS, PROSTATE, and RADIUS, or food items such as APRICOT, BIG-MAC, MOCHA, SALAD, SALT, and TOAST (or just zestful eating, e.g., GUSTO). There are geographical locations—MIAMI, NEVADA, PARIS, SIAM, and TIBET; matters of LIFE or DEATH; feminine names—ELSA, ERICA, EVA, GRACE, MONICA, NORA, PAMELA, PHYLLIS, and RITA; masculine names—ADAM, ARCHER, BERT, CAESAR, CHIP, DAVE, DAVID, DONALD, ERNST, HAROLD, ISAAC, MARVIN, and OSCAR; and on and on and on. In fact, we have reached the point where investigators are selecting a colorful acronym, and then dreaming up a suitable study to match it [15].

    In conclusion, acronymesis has become a Macho-driven Major Malady of Modern Medical Miscommunication (MMMMMM). Meaningful Management of this MMMMMM Mandates Maximum effort to Minimize acronymic Misuse (MMMMMM). Oops! We just used the same “acronym” for 2 different messages. Does that ring a bell? We can overcome acronymesis if we choose our acronyms with circumspection, derive them from the first or first 2 letters of each word in the phrase being condensed, define them at first mention, capitalize all of their letters consistently, and refrain from including common MEDLINE search words [2,17]. Given their proven usefulness, acronyms undoubtedly are here to stay. But because of their aforementioned drawbacks, we have 2 suggestions: Editors need to Concentrate On Nixing This Rarely Obvious Lingo (CONTROL), and Authors need to remember that good Communicators Resist Acronymic Proliferation (CRAP).
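
    The article's central recommendation, defining every acronym at first mention, can be spot-checked mechanically. The sketch below is a purely illustrative heuristic, not a tool referenced by the authors: it flags all-caps tokens that never appear next to a parenthesized expansion, and its regular expressions are simplifying assumptions.

```python
# Illustrative check for the editorial's main recommendation: flag acronyms that
# are never accompanied by a spelled-out definition. The regex heuristics here
# are simplistic assumptions, not a robust parser of manuscript conventions.
import re

def undefined_acronyms(text: str) -> set[str]:
    acronyms = set(re.findall(r"\b[A-Z]{2,}(?:-[A-Z]+)*\b", text))
    defined = set()
    # Treat "Long Form (ACRONYM)" or "ACRONYM (Long Form)" as a definition.
    for acronym in acronyms:
        if re.search(rf"\([^)]*\b{re.escape(acronym)}\b[^)]*\)", text) or \
           re.search(rf"\b{re.escape(acronym)}\s*\([^)]+\)", text):
            defined.add(acronym)
    return acronyms - defined

sample = ("The GUSTO (Global Utilization of Streptokinase and TPA for Occluded "
          "arteries) investigators compared outcomes with those of the EPIC trial.")
print(undefined_acronyms(sample))  # flags EPIC as undefined
```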

  • Acronyms of cardiologic trials--2002.
    International Journal of Cardiology, 2003
    Co-Authors: Tsung O. Cheng, Desmond G. Julian
    Abstract:

    Cardiological trials constitute the basic science of medical therapeutics and population health. They are important and necessary. The more trials there are, the more acronyms are created. The problem is not that there are so many trial acronyms but that these acronyms are often left unexplained in a scientific communication, whether oral or written. Compounding the problem is the fact that many identical acronyms represent entirely different trials. The Uniform Requirements for Manuscripts Submitted to Biomedical Journals, published by the International Committee of Medical Journal Editors, demand that acronyms be defined the first time they are used in any article. These requirements need to be reinforced to avoid reader aggravation, confusion and frustration.

Christopher G. Chute - One of the best experts on this subject based on the ideXlab platform.

  • Abbreviation and acronym disambiguation in clinical discourse
    American Medical Informatics Association Annual Symposium, 2005
    Co-Authors: Serguei V S Pakhomov, Ted Pedersen, Christopher G. Chute