Sense Definition

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 180 Experts worldwide ranked by ideXlab platform

E. Eleftheriou - One of the best experts on this subject based on the ideXlab platform.

  • On the computation of the minimum distance of low-density parity-check codes
    2004 IEEE International Conference on Communications (IEEE Cat. No.04CH37577), 2004
    Co-Authors: Xiao-yu Hu, M.p.c. Fossorier, E. Eleftheriou
    Abstract:

    Low-density parity-check (LDPC) codes in their broader-sense definition are linear codes whose parity-check matrices have fewer 1s than 0s. Finding their minimum distance is therefore, in general, an NP-hard problem. We propose a randomized algorithm, called the nearest nonzero codeword search (NNCS) approach, to tackle this problem for iteratively decodable LDPC codes. The principle of the NNCS approach is to search for codewords locally around the all-zero codeword perturbed by minimal noise, anticipating that the resulting nearest nonzero codewords will most likely contain the minimum-Hamming-weight codeword, whose Hamming weight is equal to the minimum distance of the linear code. This approach has its roots in Berrou et al.'s error-impulse method and a form of Fossorier's list decoding for LDPC codes.
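    The NNCS idea described above can be sketched in a few lines: perturb the all-zero word with a small amount of noise, run an iterative decoder, and record the Hamming weight of any nonzero codeword the decoder lands on. The sketch below is a toy illustration only, using a small (7,4) Hamming parity-check matrix as a stand-in for a sparse LDPC matrix and a simple Gallager-style bit-flipping decoder; it is not the authors' actual implementation.

    ```python
    import random
    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code (minimum distance 3),
    # used here as a small stand-in for a sparse LDPC matrix.
    H = np.array([
        [1, 0, 1, 0, 1, 0, 1],
        [0, 1, 1, 0, 0, 1, 1],
        [0, 0, 0, 1, 1, 1, 1],
    ], dtype=np.uint8)

    def bit_flip_decode(r, max_iters=20):
        """Hard-decision bit flipping: repeatedly flip the bit that
        participates in the most unsatisfied parity checks."""
        v = r.copy()
        for _ in range(max_iters):
            syndrome = H @ v % 2
            if not syndrome.any():
                return v                    # valid codeword reached
            counts = syndrome @ H           # unsatisfied checks per bit
            v[np.argmax(counts)] ^= 1
        return None                         # decoding failed

    def nncs_min_distance(trials=200, flips=2, seed=0):
        """Estimate the minimum distance by decoding noisy versions of
        the all-zero codeword and keeping the lightest nonzero result."""
        rng = random.Random(seed)
        n = H.shape[1]
        best = None
        for _ in range(trials):
            r = np.zeros(n, dtype=np.uint8)
            for i in rng.sample(range(n), flips):
                r[i] = 1                    # perturb the all-zero word
            c = bit_flip_decode(r)
            if c is not None and c.any():   # nearest *nonzero* codeword
                w = int(c.sum())
                best = w if best is None else min(best, w)
        return best

    print(nncs_min_distance())
    ```

    For this toy code the search recovers the true minimum distance of 3; for large LDPC codes the same scheme yields only an upper bound, which is exactly the trade-off the randomized NNCS approach accepts in exchange for tractability.
    
    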

Tanveer J. Siddiqui - One of the best experts on this subject based on the ideXlab platform.

  • Evaluating effect of context window size, stemming and stop word removal on Hindi word Sense disambiguation
    2012 International Conference on Information Retrieval & Knowledge Management, 2012
    Co-Authors: Satyendr Singh, Tanveer J. Siddiqui
    Abstract:

    This paper investigates the effects of stemming, stop word removal, and context window size on Hindi word sense disambiguation. The evaluation was carried out on a manually created sense-tagged corpus of Hindi nouns. The sense definitions were obtained from Hindi WordNet, an important lexical resource for the Hindi language developed at IIT Bombay. The maximum observed precision of 54.81% on 1248 test instances corresponds to the case when both stemming and stop word removal were performed, an improvement of 9.24% in precision and 12.68% in recall over the baseline.
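    The pipeline evaluated above has three moving parts: a ±k context window around the target word, stop word removal, and stemming, followed by overlap scoring against each sense definition. A minimal English-language sketch is below; the stop list, the crude suffix-stripping stemmer, and the glosses are hypothetical placeholders, not the paper's Hindi WordNet resources.

    ```python
    # Toy stop list and stemmer; a real system would use Hindi resources.
    STOP_WORDS = {"the", "a", "of", "in", "is", "and", "on"}

    def crude_stem(word):
        """Toy suffix stripping (stand-in for a real stemmer)."""
        for suf in ("ing", "ed", "s"):
            if word.endswith(suf) and len(word) > len(suf) + 2:
                return word[: -len(suf)]
        return word

    def context_window(tokens, target_index, k):
        """Return the tokens within +/-k of the target, excluding it."""
        lo = max(0, target_index - k)
        return [t for i, t in enumerate(tokens[lo:target_index + k + 1], lo)
                if i != target_index]

    def preprocess(tokens):
        """Lowercase, drop stop words, and stem the remaining tokens."""
        return {crude_stem(t.lower()) for t in tokens
                if t.lower() not in STOP_WORDS}

    def disambiguate(tokens, target_index, sense_glosses, k=3):
        """Pick the sense whose gloss overlaps the context window most."""
        ctx = preprocess(context_window(tokens, target_index, k))
        return max(sense_glosses,
                   key=lambda s: len(ctx & preprocess(sense_glosses[s].split())))

    sentence = "the bank raised the interest rate on savings".split()
    glosses = {
        "river_bank": "sloping land beside a river",
        "finance_bank": "institution that accepts deposits and pays interest",
    }
    print(disambiguate(sentence, 1, glosses, k=3))
    ```

    Varying `k` and toggling the stop word and stemming steps in a sketch like this is the experiment grid the paper runs at scale on its sense-tagged Hindi corpus.
    
    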

Xiao-yu Hu - One of the best experts on this subject based on the ideXlab platform.

  • On the computation of the minimum distance of low-density parity-check codes
    2004 IEEE International Conference on Communications (IEEE Cat. No.04CH37577), 2004
    Co-Authors: Xiao-yu Hu, M.p.c. Fossorier, E. Eleftheriou
    Abstract:

    Low-density parity-check (LDPC) codes in their broader-sense definition are linear codes whose parity-check matrices have fewer 1s than 0s. Finding their minimum distance is therefore, in general, an NP-hard problem. We propose a randomized algorithm, called the nearest nonzero codeword search (NNCS) approach, to tackle this problem for iteratively decodable LDPC codes. The principle of the NNCS approach is to search for codewords locally around the all-zero codeword perturbed by minimal noise, anticipating that the resulting nearest nonzero codewords will most likely contain the minimum-Hamming-weight codeword, whose Hamming weight is equal to the minimum distance of the linear code. This approach has its roots in Berrou et al.'s error-impulse method and a form of Fossorier's list decoding for LDPC codes.

Jason S Chang - One of the best experts on this subject based on the ideXlab platform.

  • Class Based Sense Definition Model for Word Sense Tagging and Disambiguation
    Proceedings of the Second SIGHAN Workshop on Chinese Language Processing, 2003
    Co-Authors: Jason S Chang
    Abstract:

    We present an unsupervised learning strategy for word sense disambiguation (WSD) that exploits multiple linguistic resources, including a parallel corpus, a bilingual machine-readable dictionary, and a thesaurus. The approach is based on the Class Based Sense Definition Model (CBSDM), which generates the glosses and translations for a class of word senses. The model can be applied to resolve sense ambiguity for words in a parallel corpus. This sense-tagging procedure, in effect, produces a semantic bilingual concordance that can be used to train WSD systems for the two languages involved. Experimental results show that CBSDM trained on the Longman Dictionary of Contemporary English, English-Chinese Edition (LDOCE E-C) and the Longman Lexicon of Contemporary English (LLOCE) is very effective in turning a Chinese-English parallel corpus into sense-tagged data for the development of WSD systems.
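    The core of the sense-tagging step described above can be illustrated with a tiny sketch: if each sense class is associated with a set of known translations, an English word in a parallel sentence pair can be tagged with the sense class whose translation actually occurs in the aligned Chinese sentence. The two-sense dictionary and sentence pair below are illustrative stand-ins for LDOCE E-C / LLOCE entries, not the paper's real resources or model.

    ```python
    # Hypothetical mapping: sense class -> set of Chinese translations.
    SENSE_TRANSLATIONS = {
        "bank/finance": {"银行"},
        "bank/river": {"河岸", "岸"},
    }

    def tag_sense(english_word, chinese_sentence):
        """Return the sense class whose known translation occurs in the
        aligned Chinese sentence, or None if no translation matches."""
        for sense, translations in SENSE_TRANSLATIONS.items():
            if any(t in chinese_sentence for t in translations):
                return sense
        return None

    pair = ("He deposited money in the bank", "他把钱存入银行")
    print(tag_sense("bank", pair[1]))  # the aligned sentence contains 银行
    ```

    Applied across a whole parallel corpus, this kind of translation-matching produces the semantic bilingual concordance the abstract mentions, which then serves as training data for WSD in both languages.
    
    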

Satyendr Singh - One of the best experts on this subject based on the ideXlab platform.

  • Evaluating effect of context window size, stemming and stop word removal on Hindi word Sense disambiguation
    2012 International Conference on Information Retrieval & Knowledge Management, 2012
    Co-Authors: Satyendr Singh, Tanveer J. Siddiqui
    Abstract:

    This paper investigates the effects of stemming, stop word removal, and context window size on Hindi word sense disambiguation. The evaluation was carried out on a manually created sense-tagged corpus of Hindi nouns. The sense definitions were obtained from Hindi WordNet, an important lexical resource for the Hindi language developed at IIT Bombay. The maximum observed precision of 54.81% on 1248 test instances corresponds to the case when both stemming and stop word removal were performed, an improvement of 9.24% in precision and 12.68% in recall over the baseline.
