Chinese Language

The Experts below are selected from a list of 91,665 Experts worldwide, ranked by the ideXlab platform.

Erik Cambria - One of the best experts on this subject based on the ideXlab platform.

  • A Review of Sentiment Analysis Research in Chinese Language
    2017
    Co-Authors: Haiyun Peng, Erik Cambria, Amir Hussain
    Abstract:

    Research on sentiment analysis in the English language has undergone major developments in recent years. Chinese sentiment analysis research, however, has not evolved significantly despite the exponential growth of Chinese e-business and e-markets. This review paper studies the past, present, and future of Chinese sentiment analysis from both monolingual and multilingual perspectives. The construction of sentiment corpora and lexica is first introduced and summarized. Next, a survey of monolingual sentiment classification in Chinese via three different classification frameworks is conducted. Finally, sentiment classification based on the multilingual approach is introduced. After an overview of the literature, we propose that a more human-like (cognitive) representation of Chinese concepts and their inter-connections could overcome the scarcity of available resources and, hence, improve the state of the art. With the increasing expansion of the Chinese language on the Web, sentiment analysis in Chinese is becoming an increasingly important research field. Concept-level sentiment analysis, in particular, is an exciting yet challenging direction for this field, one that holds great promise for the future.
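
    The concept-level point above can be illustrated with a toy example (not taken from the paper; the mini-lexicons and scores below are hypothetical): 性价比高 ("good value for money") is positive while 价格高 ("the price is high") is negative, even though the individual words carry little polarity on their own, so a resource keyed on multiword concepts rather than single words captures the distinction.

    ```python
    # Toy word-level vs. concept-level polarity lookup (hypothetical lexicons and scores).
    word_polarity = {"高": 0.0, "价格": 0.0, "性价比": 0.0}      # word level: all neutral
    concept_polarity = {"性价比 高": 0.8, "价格 高": -0.6}       # concept level: opposite signs

    def score(tokens):
        """Prefer a matching two-token concept; fall back to summed word scores."""
        bigram = " ".join(tokens)
        if bigram in concept_polarity:
            return concept_polarity[bigram]
        return sum(word_polarity.get(t, 0.0) for t in tokens)

    print(score(["性价比", "高"]))   #  0.8 -> positive concept
    print(score(["价格", "高"]))     # -0.6 -> negative concept
    ```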

  • CSenticNet: A Concept-Level Resource for Sentiment Analysis in Chinese Language
    2017
    Co-Authors: Haiyun Peng, Erik Cambria
    Abstract:

    In recent years, sentiment analysis has become a hot topic in natural language processing. Although sentiment analysis research in English is rather mature, Chinese sentiment analysis has just set sail, as the limited amount of sentiment resources in Chinese severely limits its development. In this paper, we present a method for the construction of a Chinese sentiment resource. We utilize both English sentiment resources and the Chinese knowledge base built on the NTU Multilingual Corpus. In particular, we first propose a resource based on SentiWordNet and a second version based on SenticNet.
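
    A minimal sketch of the mapping idea, under stated assumptions: the tiny Chinese-to-English dictionary below stands in for the corpus-based alignment used in the paper, and SentiWordNet is queried through NLTK (the wordnet and sentiwordnet corpora must be downloaded first).

    ```python
    # Sketch only: map a Chinese word to English via a hypothetical alignment,
    # then look up its polarity in SentiWordNet through NLTK.
    # Requires: pip install nltk; nltk.download("wordnet"); nltk.download("sentiwordnet")
    from nltk.corpus import sentiwordnet as swn

    zh_to_en = {"美丽": "beautiful", "糟糕": "terrible"}  # stand-in alignment, not the paper's

    def zh_polarity(zh_word):
        """Average (pos - neg) SentiWordNet score over the translation's senses."""
        en_word = zh_to_en.get(zh_word)
        if en_word is None:
            return None  # word not covered by the alignment
        senses = list(swn.senti_synsets(en_word))
        if not senses:
            return 0.0
        return sum(s.pos_score() - s.neg_score() for s in senses) / len(senses)

    for w in ("美丽", "糟糕"):
        print(w, zh_polarity(w))
    ```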

Christy Lao - One of the best experts on this subject based on the ideXlab platform.

  • Parents' Attitudes Toward Chinese–English Bilingual Education and Chinese Language Use
    2004
    Co-Authors: Christy Lao
    Abstract:

    This study surveyed 86 parents who enrolled their children in a Chinese–English bilingual preschool in San Francisco. The participants were asked their opinions on bilingual education, their reasons for sending their children to a Chinese–English bilingual school, their attitudes toward bilingual education, their use of Chinese and English, their expectations for their children, and the language environment at home. It was found that parents strongly supported Chinese–English bilingual education and understood the purpose and underlying principles of bilingual education. Although there were some differences between the English-dominant and Chinese-dominant parents' responses, the major reasons parents enrolled their children in a Chinese–English bilingual school were the practical advantages of being bilingual (e.g., better career opportunities), positive effects on self-image, and the development of skills enabling effective communication within the Chinese-speaking community. The majority of the paren...

Wenyuze Li - One of the best experts on this subject based on the ideXlab platform.

  • Chinese-language articles are not biased in citations: Evidences from Chinese-English bilingual journals in Scopus and Web of Science
    2014
    Co-Authors: Jiang Li, Lili Qiao, Wenyuze Li
    Abstract:

    This paper examined the citation impact of Chinese- and English-language articles in Chinese–English bilingual journals indexed by Scopus and Web of Science (WoS). Two findings emerged from the comparative analysis: (1) Chinese-language articles were not biased in citations compared with English-language articles, since they received a large number of citations from Chinese scientists; (2) a Chinese-language community was found in Scopus, in which Chinese-language articles mainly received citations from other Chinese-language articles, but no such community was found in WoS, whose coverage of Chinese-language articles is only one-tenth that of Scopus. The findings have implications for the academic evaluation of journals that include Chinese-language articles in Scopus and WoS.
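
    The kind of comparison performed can be sketched as follows (toy records, not the paper's data): compute the mean citation count per article language and the share of those citations that come from Chinese-language citing articles.

    ```python
    # Illustrative sketch of a per-language citation comparison on toy data.
    from statistics import mean

    # Each record: (language of cited article, total citations,
    #               citations received from Chinese-language articles)
    records = [
        ("zh", 12, 10), ("zh", 5, 4), ("zh", 9, 8),
        ("en", 11, 2), ("en", 7, 1), ("en", 10, 3),
    ]

    for lang in ("zh", "en"):
        rows = [r for r in records if r[0] == lang]
        avg_cites = mean(r[1] for r in rows)
        zh_share = sum(r[2] for r in rows) / sum(r[1] for r in rows)
        print(f"{lang}: mean citations = {avg_cites:.1f}, "
              f"share from Chinese-language citing articles = {zh_share:.0%}")
    ```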

Xuanwei Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Light Pre-Trained Chinese Language Model for NLP Tasks
    2020
    Co-Authors: Xuanwei Zhang
    Abstract:

    We present the results of Shared Task 1 held at the 2020 Conference on Natural Language Processing and Chinese Computing (NLPCC): Light Pre-Trained Chinese Language Model for NLP Tasks. This shared task examines the performance of light language models on four common NLP tasks: Text Classification, Named Entity Recognition, Anaphora Resolution, and Machine Reading Comprehension. To make sure that the models are lightweight, we put restrictions and requirements on the number of parameters and the inference speed of the participating models. In total, 30 teams registered for our tasks. Each submission was evaluated through our online benchmark system (https://www.cluebenchmarks.com/nlpcc2020.html), with the average score over the four tasks as the final score. Various ideas and frameworks were explored by the participants, including data augmentation, knowledge distillation, and quantization. The best model achieved an average score of 75.949, which was very close to BERT-base (76.460). We believe this shared task highlights the potential of lightweight models and calls for further research on their development and exploration.
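
    As one concrete example of the techniques participants explored, a minimal sketch of a soft-label knowledge-distillation loss is shown below; the shared task does not prescribe this exact formulation, and the temperature and weighting used here are illustrative choices.

    ```python
    # Sketch of a standard knowledge-distillation loss in PyTorch (illustrative values).
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        """Blend cross-entropy on gold labels with a KL term that pushes the
        small student toward the teacher's temperature-softened distribution."""
        ce = F.cross_entropy(student_logits, labels)          # hard-label loss
        kd = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)                                           # rescale to T=1 magnitude
        return alpha * ce + (1.0 - alpha) * kd

    # Toy usage with random tensors standing in for real model outputs.
    student = torch.randn(8, 3)          # batch of 8, 3-way classification
    teacher = torch.randn(8, 3)
    labels = torch.randint(0, 3, (8,))
    print(distillation_loss(student, teacher, labels))
    ```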