Packet Filter

The Experts below are selected from a list of 6,513 Experts worldwide ranked by the ideXlab platform

Sean Peisert - One of the best experts on this subject based on the ideXlab platform.

  • The Medical Science DMZ: a network design pattern for data-intensive medical science
    Journal of the American Medical Informatics Association, 2018
    Co-Authors: Sean Peisert, Eli Dart, William K Barnett, Edward Balas, James Cuff, Robert L Grossman, Ari E Berman, Anurag Shankar, Brian Tierney
    Abstract:

    OBJECTIVE: We describe a detailed solution for maintaining high-capacity, data-intensive network flows (e.g., 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. MATERIALS AND METHODS: High-end networking, packet-filter firewalls, and network intrusion-detection systems. RESULTS: We describe a "Medical Science DMZ" concept as an option for secure, high-volume transport of large, sensitive datasets between research institutions over national research networks, and give 3 detailed descriptions of implemented Medical Science DMZs. DISCUSSION: The exponentially increasing amounts of "omics" data, high-quality imaging, and other rapidly growing clinical datasets have resulted in the rise of biomedical research "Big Data." The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large datasets. Maintaining data-intensive flows that comply with the Health Insurance Portability and Accountability Act (HIPAA) and other regulations presents a new challenge for biomedical research. We describe a strategy that marries performance and security by borrowing from and redefining the concept of a Science DMZ, a framework that is used in physical sciences and engineering research to manage high-capacity data flows. CONCLUSION: By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computing and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.

  • The Medical Science DMZ
    Journal of the American Medical Informatics Association, 2016
    Co-Authors: Sean Peisert, Eli Dart, William K Barnett, Edward Balas, James Cuff, Ari E Berman, Anurag Shankar, Robert L Grossman, Brian Tierney
    Abstract:

    Objective: We describe use cases and an institutional reference architecture for maintaining high-capacity, data-intensive network flows (e.g., 10, 40, 100 Gbps+) in a scientific, medical context while still adhering to security and privacy laws and regulations. Materials and Methods: High-end networking, packet-filter firewalls, and network intrusion detection systems. Results: We describe a “Medical Science DMZ” concept as an option for secure, high-volume transport of large, sensitive data sets between research institutions over national research networks. Discussion: The exponentially increasing amounts of “omics” data, the rapid increase of high-quality imaging, and other rapidly growing clinical data sets have resulted in the rise of biomedical research “big data.” The storage, analysis, and network resources required to process these data and integrate them into patient diagnoses and treatments have grown to scales that strain the capabilities of academic health centers. Some data are not generated locally and cannot be sustained locally, and shared data repositories such as those provided by the National Library of Medicine, the National Cancer Institute, and international partners such as the European Bioinformatics Institute are rapidly growing. The ability to store and compute using these data must therefore be addressed by a combination of local, national, and industry resources that exchange large data sets. Maintaining data-intensive flows that comply with HIPAA and other regulations presents a new challenge for biomedical research. Recognizing this, we describe a strategy that marries performance and security by borrowing from and redefining the concept of a “Science DMZ,” a framework that is used in physical sciences and engineering research to manage high-capacity data flows. Conclusion: By implementing a Medical Science DMZ architecture, biomedical researchers can leverage the scale provided by high-performance computing and cloud storage facilities and national high-speed research networks while preserving privacy and meeting regulatory requirements.
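
    A Science DMZ replaces stateful, inline firewalls on the data path with stateless packet-filter ACLs that admit only known collaborator networks and data-transfer ports, while intrusion detection runs out of band on mirrored traffic. The minimal Python sketch below illustrates that kind of stateless, default-deny ACL check; the addresses, ports, and rule set are hypothetical, not taken from the paper.

        import ipaddress

        # Hypothetical stateless ACL for a data-transfer node: first match wins,
        # default deny. No per-connection state is kept, which is what lets such
        # filters keep up with 10-100 Gbps flows.
        ACL = [
            (ipaddress.ip_network("192.0.2.0/24"), 2811, "accept"),    # collaborator site, GridFTP control
            (ipaddress.ip_network("198.51.100.0/24"), 443, "accept"),  # partner data portal
        ]

        def filter_packet(src_ip: str, dst_port: int) -> str:
            src = ipaddress.ip_address(src_ip)
            for net, port, action in ACL:
                if src in net and dst_port == port:
                    return action
            return "drop"  # default deny; an out-of-band IDS watches mirrored traffic

        print(filter_packet("192.0.2.10", 2811))  # accept
        print(filter_packet("203.0.113.5", 22))   # drop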

William H. Mangione-Smith - One of the best experts on this subject based on the ideXlab platform.

  • Deep network packet filter design for reconfigurable devices
    ACM Transactions on Embedded Computing Systems, 2008
    Co-Authors: Young H. Cho, William H. Mangione-Smith
    Abstract:

    Most network routers and switches provide some protection against network attacks. However, the rapidly increasing amount of damage reported over the past few years indicates an urgent need for tougher security. Deep-packet inspection is one solution for capturing packets that cannot be identified using traditional methods. It uses a list of signatures to scan the entire content of each packet, providing the means to filter harmful packets out of the network. Since no signature depends on any other, the filtering process has a high degree of parallelism. Most software and hardware deep-packet filters in use today execute their tasks on a von Neumann architecture, which cannot fully exploit this parallelism. For instance, one of the most widely used network intrusion-detection systems, Snort, configured with 845 patterns and running on a dual 1-GHz Pentium III system, can sustain a throughput of only 50 Mbps. The poor performance stems from the fact that the processor is programmed to execute several tasks sequentially instead of simultaneously. We designed scalable deep-packet filters on field-programmable gate arrays (FPGAs) to search for all data-independent patterns simultaneously. With FPGAs, we have the ability to reprogram the filter whenever the signature set changes. The smallest full-pattern matcher implementation for the latest Snort NIDS fits in a single 400k-gate Xilinx FPGA (Spartan-3 XC3S400) with a sustained throughput of 1.6 Gbps. Given a larger FPGA, the design can scale linearly to support a greater number of patterns, as well as higher data throughput.
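
    The property the hardware exploits is that each signature comparison is independent of every other, so all of them can run at once. A small Python sketch of that idea follows; the signature set and function names are illustrative, and a thread pool merely stands in for the FPGA's parallel comparators.

        from concurrent.futures import ThreadPoolExecutor

        SIGNATURES = [b"/bin/sh", b"cmd.exe", b"\x90\x90\x90\x90"]  # hypothetical toy set

        def match_one(payload: bytes, sig: bytes) -> bool:
            # One comparator chain: scan every offset of the payload for one signature.
            return sig in payload

        def deep_filter(payload: bytes) -> bool:
            # Signatures are data-independent, so the comparisons could all run
            # simultaneously; software emulates this with a thread pool.
            with ThreadPoolExecutor() as pool:
                results = pool.map(match_one, [payload] * len(SIGNATURES), SIGNATURES)
            return any(results)  # True means the packet should be filtered out

        print(deep_filter(b"GET /bin/sh HTTP/1.0"))  # True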

  • Fast reconfiguring deep packet filter for 1+ gigabit network
    Proceedings - 13th Annual IEEE Symposium on Field-Programmable Custom Computing Machines, FCCM 2005, 2005
    Co-Authors: Young H. Cho, William H. Mangione-Smith
    Abstract:

    Due to the increasing number of network worms and viruses, many computer network users are vulnerable to attacks. Unless network security systems use more advanced methods of content filtering, such as deep packet inspection, the problem gets worse. However, searching for patterns at multiple offsets in the entire content of a network packet requires more processing power than most general-purpose processors can provide. Thus, researchers have developed high-performance parallel deep packet filters for reconfigurable devices. Although some reconfigurable systems can be generated automatically from a pattern database, obtaining a high-performance result from each subsequent reconfiguration can be a time-consuming process. We present a novel architecture for a programmable parallel pattern-matching coprocessor. By combining a scalable coprocessor with the compact reconfigurable filter, we produce a hybrid system that is able to apply rule updates immediately while the new filter is being compiled. We mapped our hybrid filter for the latest Snort rule set as of January 13, 2005, containing 2,044 unique patterns that make up 32,384 bytes, onto a single Xilinx Virtex-4 LX (XC4VLX15) FPGA with a filtering rate of 2 Gbps.
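
    The hybrid idea is that a slow-to-rebuild compiled filter is paired with a small programmable matcher that absorbs rule changes instantly. A behavioral Python sketch of that division of labor follows; the class and method names are ours, and plain substring search stands in for both hardware matchers.

        class HybridFilter:
            def __init__(self, compiled_patterns):
                self.compiled = frozenset(compiled_patterns)  # fixed until the next "synthesis"
                self.overlay = set()                          # programmable coprocessor rules

            def add_rule(self, pattern: bytes):
                self.overlay.add(pattern)  # takes effect immediately, no recompile needed

            def recompile(self):
                # Models swapping in a freshly compiled filter that now covers
                # the overlay patterns, freeing the coprocessor for new updates.
                self.compiled = frozenset(self.compiled | self.overlay)
                self.overlay.clear()

            def matches(self, payload: bytes) -> bool:
                return (any(p in payload for p in self.compiled)
                        or any(p in payload for p in self.overlay))

        f = HybridFilter([b"cmd.exe"])
        f.add_rule(b"/etc/passwd")            # new rule filters traffic right away
        print(f.matches(b"cat /etc/passwd"))  # True, even before recompile()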

  • Deep packet filter with dedicated logic and read-only memories
    12th Annual IEEE Symposium on Field-Programmable Custom Computing Machines, 2004
    Co-Authors: Young H. Cho, William H. Mangione-Smith
    Abstract:

    Searching for multiple string patterns in a stream of data is a computationally expensive task. The speed of the pattern-search module determines the overall performance of deep packet inspection firewalls, intrusion detection systems (IDS), and intrusion prevention systems (IPS). For example, one open-source IDS configured for 845 patterns can sustain a throughput of only 50 Mbps running on a dual 1-GHz Pentium III system. Such systems are not practical for filtering high-speed networks carrying over 1 Gbps of traffic. Some of these systems are implemented with field-programmable gate arrays (FPGAs) so that they are fast and programmable. However, such FPGA filters tend to be too large to map onto a single FPGA. By sharing the common sublogic in the design, we can effectively shrink the footprint of the filter. Then, for a large subset of the patterns, the logic area can be further reduced by using a memory-based architecture. These design methods allow our filter for 2,064 attack patterns to map onto a single Xilinx Spartan-3 XC3S2000 FPGA with a filtering rate of over 3 Gbps of network traffic.
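
    Sharing common sublogic is, in software terms, factoring the shared prefixes out of the pattern set, and storing large pattern subsets in memory resembles a table-driven matcher. The byte-trie sketch below shows the prefix-sharing half of that idea; it is an illustrative analogue, not the paper's circuit design.

        def build_trie(patterns):
            # Patterns with a common prefix share trie nodes, the software
            # analogue of sharing comparator sublogic between pattern matchers.
            root = {}
            for p in patterns:
                node = root
                for byte in p:
                    node = node.setdefault(byte, {})
                node[None] = p  # terminal marker: a complete pattern ends here
            return root

        def scan(payload: bytes, root) -> bool:
            # A deep packet filter must try a match starting at every offset.
            for i in range(len(payload)):
                node = root
                for byte in payload[i:]:
                    if byte not in node:
                        break
                    node = node[byte]
                    if None in node:
                        return True
            return False

        trie = build_trie([b"attack", b"attic", b"exploit"])  # "att" prefix shared
        print(scan(b"GET /attic.html", trie))  # True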

Lam-For Kwok - One of the best experts on this subject based on the ideXlab platform.

  • EFM: enhancing the performance of signature-based network intrusion detection systems using enhanced filter mechanism
    Computers & Security, 2014
    Co-Authors: Weizhi Meng, Lam-For Kwok
    Abstract:

    Signature-based network intrusion detection systems (NIDSs) have been widely deployed in current network security infrastructure. However, these detection systems suffer from limitations such as network packet overload, expensive signature matching, and massive false alarms in a large-scale network environment. In this paper, we aim to develop an enhanced filter mechanism (named EFM) to comprehensively mitigate these issues. It consists of three major components: a context-aware blacklist-based packet filter, an exclusive signature-matching component, and a KNN-based false alarm filter. The experiments, which were conducted with two datasets and in a network environment, demonstrate that our proposed EFM can enhance the overall performance of a signature-based NIDS such as Snort in the aspects of packet filtration, signature-matching improvement, and false alarm reduction without affecting network security.
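
    The third EFM component decides whether a fresh alarm is likely false by comparing it with previously labeled alarms. A minimal k-nearest-neighbor sketch in Python follows; the feature vector and the vote rule are our own illustrative assumptions, not the paper's exact feature set.

        import math
        from collections import Counter

        def knn_is_false_alarm(alarm, labeled_alarms, k=5):
            # 'alarm' is a numeric feature vector (e.g., priority, packet size,
            # port); 'labeled_alarms' is a list of (features, is_false) pairs
            # built from an analyst's earlier verdicts.
            neighbors = sorted(labeled_alarms,
                               key=lambda fa: math.dist(alarm, fa[0]))[:k]
            votes = Counter(is_false for _, is_false in neighbors)
            return votes[True] > votes[False]  # majority vote of the k neighbors

        history = [((3, 60, 80), True), ((3, 64, 80), True), ((1, 1500, 445), False),
                   ((1, 1400, 445), False), ((3, 58, 80), True)]
        print(knn_is_false_alarm((3, 62, 80), history, k=3))  # True: resembles past false alarms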

  • Adaptive blacklist-based packet filter with a statistic-based approach in network intrusion detection
    Journal of Network and Computer Applications, 2014
    Co-Authors: Yuxin Meng, Lam-For Kwok
    Abstract:

    Network intrusion detection systems (NIDSs) are widely deployed in various network environments. Compared to an anomaly-based NIDS, a signature-based NIDS is more popular in real-world applications because of its relatively lower false alarm rate. However, the process of signature matching is a key factor limiting the performance of a signature-based NIDS: its cost is at least linear in the size of the input string, and CPU occupancy can exceed 80% in the worst case. In this paper, we develop an adaptive blacklist-based packet filter using a statistic-based approach, aiming to improve the performance of a signature-based NIDS. The filter employs a blacklist technique to filter out network packets based on IP confidence, and the statistic-based approach allows the blacklist to be generated adaptively, that is, updated periodically. In the evaluation, we give a detailed analysis of how to select weight values in the statistic-based approach, and we investigate the performance of the packet filter with a DARPA dataset, a real dataset, and in a real network environment. Our evaluation results under various scenarios show that our proposed packet filter is effective at reducing the burden of a signature-based NIDS without affecting network security.
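
    The filter's core loop is: score each source IP from recent per-IP statistics using fixed weights, periodically rebuild the blacklist from those scores, and let blacklisted packets skip full signature matching. A sketch under assumed features and weights (the paper's actual weight selection is the subject of its evaluation):

        def ip_confidence(stats, weights):
            # 'stats' maps feature name -> value in [0, 1] for one source IP
            # (e.g., fraction of its recent packets that triggered alerts);
            # 'weights' maps the same feature names to their weight values.
            return sum(weights[f] * v for f, v in stats.items())

        def rebuild_blacklist(per_ip_stats, weights, threshold=0.7):
            # Adaptive step: recompute every IP's confidence from the latest
            # statistics, so the blacklist tracks changing traffic.
            return {ip for ip, st in per_ip_stats.items()
                    if ip_confidence(st, weights) >= threshold}

        weights = {"alert_rate": 0.6, "bad_payload_rate": 0.4}   # assumed weights
        per_ip = {"203.0.113.9": {"alert_rate": 0.9, "bad_payload_rate": 0.8},
                  "192.0.2.50":  {"alert_rate": 0.1, "bad_payload_rate": 0.0}}
        blacklist = rebuild_blacklist(per_ip, weights)
        print("203.0.113.9" in blacklist)  # True: its packets bypass signature matching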

  • Towards designing packet filter with a trust-based approach using Bayesian inference in network intrusion detection
    International Conference on Security and Privacy in Communication Systems, 2012
    Co-Authors: Yuxin Meng, Lam-For Kwok
    Abstract:

    Network intrusion detection systems (NIDSs) have become an essential part of current network security infrastructure. However, in a large-scale network, the sheer volume of network packets can greatly decrease the effectiveness of such detection systems by significantly increasing the processing burden of a NIDS. To mitigate this issue, we advocate that constructing a packet filter is a promising and complementary solution to reduce the workload of a NIDS, especially the burden of signature matching. We previously developed a blacklist-based packet filter to help a NIDS filter out network packets and achieved positive experimental results, but the calculation of IP confidence remained a major challenge. In this paper, we further design a packet filter with a trust-based method that uses Bayesian inference to calculate IP confidence, and we explore its performance with a real dataset and in a network environment. We also analyze the trust-based method by comparing it with our previous weight-based method. The experimental results show that, using the trust-based calculation of IP confidence, our trust-based blacklist packet filter achieves a better outcome.
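
    One standard way to realize Bayesian updating of per-IP trust is a Beta-Bernoulli model: treat each packet from an IP as a good/bad observation and keep the Beta posterior. The sketch below uses that model as an assumption; the paper's exact formulation may differ.

        class IPTrust:
            def __init__(self, alpha=1.0, beta=1.0):
                self.alpha = alpha  # prior pseudo-count of bad observations
                self.beta = beta    # prior pseudo-count of good observations

            def observe(self, bad: bool):
                # Bayesian update: each observation shifts the Beta posterior.
                if bad:
                    self.alpha += 1
                else:
                    self.beta += 1

            def confidence(self) -> float:
                # Posterior mean probability that traffic from this IP is bad.
                return self.alpha / (self.alpha + self.beta)

        trust = IPTrust()
        for triggered_alert in (True, True, False, True):
            trust.observe(triggered_alert)
        print(trust.confidence() > 0.6)  # True: blacklist this source IP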
