Web Server Software

The Experts below are selected from a list of 27456 Experts worldwide ranked by ideXlab platform

Karim R. Lakhani - One of the best experts on this subject based on the ideXlab platform.

  • How open source Software works: “free” user-to-user assistance
    Research Policy, 2003
    Co-Authors: Karim R. Lakhani, Eric Von Hippel
    Abstract:

    Research into free and open source Software development projects has so far largely focused on how the major tasks of Software development are organized and motivated. But a complete project requires the execution of “mundane but necessary” tasks as well. In this paper, we explore how the mundane but necessary task of field support is organized in the case of Apache Web Server Software, and why some project participants are motivated to provide this service gratis to others. We find that the Apache field support system functions effectively. We also find that, when we partition the help system into its component tasks, 98% of the effort expended by information providers in fact returns direct learning benefits to those providers. This finding considerably reduces the puzzle of why information providers are willing to perform this task “for free.” Implications are discussed. © 2002 Elsevier Science B.V. All rights reserved.

  • Sustaining the virtual commons : end user support for Apache Web Server Software on the Usenet
    1999
    Co-Authors: Karim R. Lakhani
    Abstract:

    Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Technology and Policy Program, 1999.

P. Vandal - One of the best experts on this subject based on the ideXlab platform.

  • PRDC - Performance and Reliability Analysis of Web Server Software Architectures
    2006 12th Pacific Rim International Symposium on Dependable Computing (PRDC'06), 2006
    Co-Authors: Swapna S. Gokhale, P. Vandal
    Abstract:

    Our increasing reliance on the information and services provided by modern Web Servers mandates that these services be offered with superior performance and reliability. The architecture of a Web Server has a profound impact on its performance and reliability. One of the dimensions used to characterize the architecture of a Web Server is the processing model employed in the Server, which describes the type of process or threading model used to support Web Server operation. The main options for a processing model are process-based, thread-based, or a hybrid of the process-based and the thread-based models. These options have unique advantages and disadvantages in terms of their performance and reliability tradeoffs. In this paper, we propose an analysis methodology based on the stochastic reward net (SRN) modeling paradigm to quantify the performance and the reliability tradeoffs in the process-based and the thread-based Web Server Software architectures. We demonstrate the capability of the methodology to facilitate systematic, quantitative tradeoffs using several examples.
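
The process-based and thread-based models contrasted in the abstract can be made concrete with a small sketch. The following Python stub is an illustration for this page only, not the authors' experimental setup: it serves the same trivial handler either with one thread per connection or with one forked child process per connection, and the class and handler names are this sketch's own.

    # Minimal thread-based vs. process-based HTTP servers using only the
    # Python standard library. The hybrid model discussed in the abstract
    # would pre-fork several processes, each running a pool of threads.
    import socketserver
    from http.server import BaseHTTPRequestHandler, HTTPServer, ThreadingHTTPServer

    class HelloHandler(BaseHTTPRequestHandler):
        """Trivial handler standing in for real request processing."""

        def do_GET(self):
            body = b"hello\n"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    class ForkingHTTPServer(socketserver.ForkingMixIn, HTTPServer):
        """Process-based model (Unix only): one child process per connection.
        A fault in a child is isolated, but processes are costlier to create."""

    def run(model="thread", port=8080):
        # Thread-based model: one thread per connection; threads are cheap and
        # share memory, but a fault in any thread can take down the whole process.
        server_cls = ThreadingHTTPServer if model == "thread" else ForkingHTTPServer
        with server_cls(("127.0.0.1", port), HelloHandler) as srv:
            srv.serve_forever()

    if __name__ == "__main__":
        run("thread")

In Apache httpd terms, the prefork MPM corresponds to the process-based model, while the worker and event MPMs are hybrids that run a pool of threads inside each of several processes.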

Idit Keidar - One of the best experts on this subject based on the ideXlab platform.

  • Do not crawl in the DUST: Different URLs with similar text
    ACM Transactions on the Web, 2009
    Co-Authors: Ziv Bar-yossef, Idit Keidar, Uri Schonfeld
    Abstract:

    We consider the problem of DUST: Different URLs with Similar Text. Such duplicate URLs are prevalent in Web sites, as Web Server Software often uses aliases and redirections, and dynamically generates the same page from various different URL requests. We present a novel algorithm, DustBuster, for uncovering DUST; that is, for discovering rules that transform a given URL to others that are likely to have similar content. DustBuster mines DUST effectively from previous crawl logs or Web Server logs, without examining page contents. Verifying these rules via sampling requires fetching few actual Web pages. Search engines can benefit from information about DUST to increase the effectiveness of crawling, reduce indexing overhead, and improve the quality of popularity statistics such as PageRank.

  • WWW - Do not crawl in the DUST: different URLs with similar text
    Proceedings of the 15th international conference on World Wide Web - WWW '06, 2006
    Co-Authors: Uri Schonfeld, Ziv Bar-yossef, Idit Keidar
    Abstract:

    We consider the problem of dust: Different URLs with Similar Text. Such duplicate URLs are prevalent in Web sites, as Web Server Software often uses aliases and redirections, translates URLs to some canonical form, and dynamically generates the same page from various different URL requests. We present a novel algorithm, DustBuster, for uncovering dust; that is, for discovering rules for transforming a given URL to others that are likely to have similar content. DustBuster is able to detect dust effectively from previous crawl logs or Web Server logs, without examining page contents. Verifying these rules via sampling requires fetching few actual Web pages. Search engines can benefit from this information to increase the effectiveness of crawling, reduce indexing overhead as well as improve the quality of popularity statistics such as PageRank.
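
Both abstracts above describe the same pipeline: mine candidate DUST rules from crawl or Web Server logs without looking at page contents, then verify the surviving rules by sampling a few pages. The snippet below is a much-simplified illustration of the mining step, restricted to rules that substitute a single path segment; the function names, the segment-level restriction, and the toy log are assumptions of this sketch, not the DustBuster algorithm itself.

    # Count, for every pair of path-segment values, how many URL pairs in the
    # log differ only in that segment; high-support pairs become candidate
    # DUST rules alpha -> beta.
    from collections import defaultdict
    from itertools import combinations
    from urllib.parse import urlsplit

    def candidate_rules(urls, min_support=2):
        """Return {(alpha, beta): support} for single-segment substitutions."""
        envelopes = defaultdict(set)  # (host, prefix, suffix) -> segment values
        for url in set(urls):
            parts = urlsplit(url)
            segments = parts.path.split("/")
            for i, seg in enumerate(segments):
                prefix = "/".join(segments[:i])
                suffix = "/".join(segments[i + 1:])
                envelopes[(parts.netloc, prefix, suffix)].add(seg)

        support = defaultdict(int)
        for values in envelopes.values():
            for a, b in combinations(sorted(values), 2):
                support[(a, b)] += 1  # one URL pair differing only here
        return {rule: s for rule, s in support.items() if s >= min_support}

    if __name__ == "__main__":
        log_urls = [
            "http://example.com/story/123/index.html",
            "http://example.com/story/123/",
            "http://example.com/story/456/index.html",
            "http://example.com/story/456/",
        ]
        # Yields ('', 'index.html') with support 2, a plausible DUST rule, but
        # also ('123', '456'), a false positive; this is exactly why the rules
        # are then verified against a small sample of fetched pages.
        print(candidate_rules(log_urls))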

Swapna S. Gokhale - One of the best experts on this subject based on the ideXlab platform.

  • PRDC - Performance and Reliability Analysis of Web Server Software Architectures
    2006 12th Pacific Rim International Symposium on Dependable Computing (PRDC'06), 2006
    Co-Authors: Swapna S. Gokhale, P. Vandal
    Abstract:

    Our increasing reliance on the information and services provided by modern Web Servers mandates that these services be offered with superior performance and reliability. The architecture of a Web Server has a profound impact on its performance and reliability. One of the dimensions used to characterize the architecture of a Web Server is the processing model employed in the Server, which describes the type of process or threading model used to support Web Server operation. The main options for a processing model are process-based, thread-based, or a hybrid of the process-based and the thread-based models. These options have unique advantages and disadvantages in terms of their performance and reliability tradeoffs. In this paper, we propose an analysis methodology based on the stochastic reward net (SRN) modeling paradigm to quantify the performance and the reliability tradeoffs in the process-based and the thread-based Web Server Software architectures. We demonstrate the capability of the methodology to facilitate systematic, quantitative tradeoffs using several examples.
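
The same paper is listed under P. Vandal above, where a minimal server sketch contrasts the two processing models. As a complement, the toy calculation below shows the shape of the tradeoff the paper quantifies with stochastic reward nets; the overhead figures and formulas are assumptions of this example, not the authors' SRN model.

    # Back-of-the-envelope tradeoff: dispatch cost favors threads, fault
    # isolation favors processes. All numbers are illustrative assumptions.
    def tradeoff(model, workers=100, service_time_ms=10.0):
        """Return (approx. max requests/sec, in-flight requests lost per crash)."""
        # Assumed per-request dispatch overhead: forking a process is costlier
        # than handing work to a thread.
        dispatch_overhead_ms = {"process": 1.0, "thread": 0.1}[model]
        per_request_ms = service_time_ms + dispatch_overhead_ms
        throughput = workers * 1000.0 / per_request_ms
        # Reliability: a fault in a process-based worker loses one request; in
        # a thread-based server all threads share one address space, so a
        # crash can lose every in-flight request.
        lost_per_crash = 1 if model == "process" else workers
        return throughput, lost_per_crash

    for m in ("process", "thread"):
        rps, lost = tradeoff(m)
        print(f"{m:>7}: ~{rps:.0f} req/s, ~{lost} in-flight requests lost per crash")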

Uri Schonfeld - One of the best experts on this subject based on the ideXlab platform.

  • Do not crawl in the DUST: Different URLs with similar text
    ACM Transactions on the Web, 2009
    Co-Authors: Ziv Bar-yossef, Idit Keidar, Uri Schonfeld
    Abstract:

    We consider the problem of DUST: Different URLs with Similar Text. Such duplicate URLs are prevalent in Web sites, as Web Server Software often uses aliases and redirections, and dynamically generates the same page from various different URL requests. We present a novel algorithm, DustBuster, for uncovering DUST; that is, for discovering rules that transform a given URL to others that are likely to have similar content. DustBuster mines DUST effectively from previous crawl logs or Web Server logs, without examining page contents. Verifying these rules via sampling requires fetching few actual Web pages. Search engines can benefit from information about DUST to increase the effectiveness of crawling, reduce indexing overhead, and improve the quality of popularity statistics such as PageRank.

  • WWW - Do not crawl in the DUST: different URLs with similar text
    Proceedings of the 15th international conference on World Wide Web - WWW '06, 2006
    Co-Authors: Uri Schonfeld, Ziv Bar-yossef, Idit Keidar
    Abstract:

    We consider the problem of dust: Different URLs with Similar Text. Such duplicate URLs are prevalent in Web sites, as Web Server Software often uses aliases and redirections, translates URLs to some canonical form, and dynamically generates the same page from various different URL requests. We present a novel algorithm, DustBuster, for uncovering dust; that is, for discovering rules for transforming a given URL to others that are likely to have similar content. DustBuster is able to detect dust effectively from previous crawl logs or Web Server logs, without examining page contents. Verifying these rules via sampling requires fetching few actual Web pages. Search engines can benefit from this information to increase the effectiveness of crawling, reduce indexing overhead as well as improve the quality of popularity statistics such as PageRank.
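
The DustBuster abstracts are also listed under Idit Keidar above, where a simplified rule-mining sketch is given. They further note that candidate rules are verified by fetching only a small sample of pages; a minimal version of that step might look like the following, where the helper names, the exact-hash comparison, and the 80% agreement threshold are assumptions of this sketch rather than the paper's procedure.

    # Spot-check a candidate rule alpha -> beta by fetching a few URL pairs
    # that differ only by the substitution and comparing their content. A real
    # crawler would respect robots.txt, rate-limit its requests, and compare
    # shingles rather than exact hashes.
    import hashlib
    import random
    from urllib.request import urlopen

    def _fingerprint(url, timeout=10):
        with urlopen(url, timeout=timeout) as resp:
            # Collapse whitespace so trivial formatting differences do not matter.
            text = b" ".join(resp.read().split())
        return hashlib.sha256(text).hexdigest()

    def rule_seems_valid(alpha, beta, urls, sample_size=5, threshold=0.8):
        """Sample URLs containing alpha, apply alpha -> beta, compare content."""
        candidates = [u for u in urls if alpha and alpha in u]
        sample = random.sample(candidates, min(sample_size, len(candidates)))
        if not sample:
            return False
        agree = 0
        for url in sample:
            try:
                if _fingerprint(url) == _fingerprint(url.replace(alpha, beta, 1)):
                    agree += 1
            except OSError:
                pass  # treat fetch failures as disagreement
        return agree / len(sample) >= threshold

A crawler or search engine would then apply the rules that survive this check to rewrite URLs to a canonical form before fetching or indexing them, which is how the gains in crawling effectiveness and indexing overhead mentioned in the abstract are realized.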