The experts below are selected from a list of 336 experts worldwide ranked by the ideXlab platform.
Jack Dongarra - One of the best experts on this subject based on the ideXlab platform.
-
Scalable Networked Information Processing Environment (SNIPE)
Future Generation Computer Systems, 1999
Co-Authors: Graham E Fagg, Keith Moore, Jack Dongarra
Abstract: Scalable Networked Information Processing Environment (SNIPE) is a metacomputing system that aims to provide a reliable, secure, fault-tolerant environment for long-term distributed computing applications and data stores across the global Internet. The system combines global naming and replication of both processing and data to support large-scale information processing applications, giving better availability and reliability than typical cluster computing and/or distributed computing environments currently offer. To facilitate this, the system supports distributed data collection, distributed computation, distributed control and resource management, distributed output, and process migration. The underlying system supports multiple communication paths, media, and routing methods to aid performance and robustness across both local and global networks. This paper details the goals, design, and an initial implementation of SNIPE, then demonstrates its usefulness in supporting a middleware project. Initial communications performance is also presented.
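The multipath robustness the abstract describes, where a message can travel over several communication paths and routing methods, can be sketched minimally. This is an illustrative example only; the function and path names below are hypothetical and do not reflect SNIPE's actual API.

```python
# Illustrative sketch (not SNIPE's real interface): try each available
# communication path in order and fall back to the next one on failure.

def send_with_failover(message, paths):
    """Try each (name, transport) pair in order; return the name of the
    first path that delivers the message, or raise if all paths fail."""
    errors = {}
    for name, transport in paths:
        try:
            transport(message)          # hypothetical per-path send callable
            return name
        except ConnectionError as exc:
            errors[name] = exc          # record the failure, try the next path
    raise ConnectionError(f"all paths failed: {errors}")

def flaky_link(message):
    raise ConnectionError("link down")  # simulates an unreachable path

def working_link(message):
    pass                                # simulates successful delivery

chosen = send_with_failover(b"hello", [("path-a", flaky_link), ("path-b", working_link)])
print(chosen)  # -> path-b
```

The point of the sketch is the control flow: delivery succeeds as long as any one of the configured paths is alive, which is the availability argument the abstract makes.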
-
Scalable Networked Information Processing Environment (SNIPE)
Conference on High Performance Computing (Supercomputing), 1997
Co-Authors: Graham E Fagg, Keith Moore, Jack Dongarra, Al Geist
Abstract: SNIPE is a metacomputing system that aims to provide a reliable, secure, fault-tolerant environment for long-term distributed computing applications and data stores across the global Internet. The system combines global naming and replication of both processing and data to support large-scale information processing applications, giving better availability and reliability than typical cluster computing and/or distributed computing environments currently offer. To facilitate this, the system supports distributed data collection, distributed computation, distributed control and resource management, distributed output, and process migration. The underlying system supports multiple communication paths, media, and routing methods to aid performance and robustness across both local and global networks.
Graham E Fagg - One of the best experts on this subject based on the ideXlab platform.
-
Scalable Networked Information Processing Environment (SNIPE)
Future Generation Computer Systems, 1999
Co-Authors: Graham E Fagg, Keith Moore, Jack Dongarra
(Abstract identical to the entry of the same title under Jack Dongarra above.)
-
Scalable Networked Information Processing Environment (SNIPE)
Conference on High Performance Computing (Supercomputing), 1997
Co-Authors: Graham E Fagg, Keith Moore, Jack Dongarra, Al Geist
(Abstract identical to the entry of the same title under Jack Dongarra above.)
Martin Abadi - One of the best experts on this subject based on the ideXlab platform.
-
Unified Declarative Platform for Secure Networked Information Systems
Proceedings - International Conference on Data Engineering, 2009
Co-Authors: Wenchao Zhou, Boon Thau Loo, Yun Mao, Martin Abadi
Abstract: We present a unified declarative platform for specifying, implementing, and analyzing secure networked information systems. Our work builds upon techniques from logic-based trust management systems, declarative networking, and data analysis via provenance. We make the following contributions. First, we propose the Secure Network Datalog (SeNDlog) language, which unifies Binder, a logic-based language for access control in distributed systems, and Network Datalog, a distributed recursive query language for declarative networks. SeNDlog enables network routing, information systems, and their security policies to be specified and implemented within a common declarative framework. Second, we extend existing distributed recursive query processing techniques to execute SeNDlog programs that incorporate authenticated communication among untrusted nodes. Third, we demonstrate that distributed network provenance can be supported naturally within our declarative framework for network security analysis and diagnostics. Finally, using a local cluster and the PlanetLab testbed, we perform a detailed performance study of a variety of secure networked systems implemented using our platform.
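The recursive-query core that Network Datalog (and hence SeNDlog) builds on can be illustrated with the classic reachability program, evaluated to a least fixpoint. This is a hypothetical, single-machine sketch for intuition only; it ignores SeNDlog's distribution, authentication, and provenance machinery:

```python
# Naive bottom-up evaluation of the Datalog reachability rules:
#   reachable(X, Y) :- link(X, Y).
#   reachable(X, Z) :- link(X, Y), reachable(Y, Z).

def reachable(links):
    """Compute the least fixpoint: all (src, dst) pairs connected by links."""
    reach = set(links)                  # base rule: every link is reachable
    changed = True
    while changed:                      # iterate the recursive rule to fixpoint
        changed = False
        for (x, y) in links:
            for (a, b) in list(reach):
                if a == y and (x, b) not in reach:
                    reach.add((x, b))   # derived fact: x reaches b via y
                    changed = True
    return reach

facts = reachable({("a", "b"), ("b", "c")})
print(sorted(facts))  # -> [('a', 'b'), ('a', 'c'), ('b', 'c')]
```

In SeNDlog, as the abstract notes, such rules are additionally located at nodes and evaluated by distributed recursive query processing with authenticated messages between untrusted nodes; the fixpoint semantics above is the common foundation.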
Keith Moore - One of the best experts on this subject based on the ideXlab platform.
-
Scalable Networked Information Processing Environment (SNIPE)
Future Generation Computer Systems, 1999
Co-Authors: Graham E Fagg, Keith Moore, Jack Dongarra
(Abstract identical to the entry of the same title under Jack Dongarra above.)
-
Scalable Networked Information Processing Environment (SNIPE)
Conference on High Performance Computing (Supercomputing), 1997
Co-Authors: Graham E Fagg, Keith Moore, Jack Dongarra, Al Geist
(Abstract identical to the entry of the same title under Jack Dongarra above.)
Wei Zhen - One of the best experts on this subject based on the ideXlab platform.
-
A Novel Whole-View Test Approach for Onsite Commissioning in Smart Substation
Power and Energy Society General Meeting, 2014
Co-Authors: Shi Jing, Qi Huang, Wei Zhen
Abstract: The substation is the heart of an interconnected power system. With the development of the smart grid, the concept of the smart substation has been proposed. The objectives of the smart substation are to build an efficient networked information management platform, increase the flexibility in the organization and distribution of information, and greatly enhance the capabilities of information exchange and processing in the substation. However, the onsite commissioning of a smart substation is much more complex than that of traditional ones, because digital information is more difficult to measure and a complex networked information flow needs to be verified. In this paper, a novel test approach, based on wireless time synchronization and distributed injection of simulated data, is proposed for the onsite commissioning of the secondary system in a smart substation. The system components are built and tested. Technical issues are solved to satisfy the requirements of onsite commissioning, and the built system has been used for commissioning services at several newly built smart substations.
-
A Novel Whole-View Test Approach for Onsite Commissioning in Smart Substation
IEEE Transactions on Power Delivery, 2013
Co-Authors: Shi Jing, Qi Huang, Wei Zhen
Abstract: The substation is the heart of an interconnected power system. With the development of the smart grid, the concept of the smart substation has been proposed. The objectives of smart substations are to build an efficient networked information-management platform, increase the flexibility in the organization and distribution of information, and greatly enhance the capabilities of information exchange and processing in the substation. However, the onsite commissioning of a smart substation is more complex than that of traditional ones, because digital information is more difficult to measure and a complex networked information flow needs to be verified. In this paper, a novel test approach, based on wireless time synchronization and distributed injection of simulated data, is proposed for the onsite commissioning of the secondary system in a smart substation. The system components are built and tested. Technical issues are solved to satisfy the requirements of onsite commissioning, and the built system has been used for commissioning services at several newly built smart substations.
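The combination of time synchronization and distributed injection described in the abstract implies one concrete calculation: each injection device must translate the agreed injection instant on the shared time reference into its own local clock. The sketch below illustrates that offset arithmetic only; the function names are hypothetical and the paper's actual wireless synchronization scheme is not modeled.

```python
# Illustrative clock-offset sketch (hypothetical names): a test device
# estimates its offset from a shared time reference, then converts the
# agreed reference-time injection instant into its own local clock time,
# so that all devices inject simulated data at the same absolute moment.

def clock_offset(local_now, reference_now):
    """Offset to add to local time to obtain reference time."""
    return reference_now - local_now

def local_trigger_time(inject_at_reference, local_now, reference_now):
    """When (on the local clock) to fire so injection hits the agreed instant."""
    return inject_at_reference - clock_offset(local_now, reference_now)

# Example: the device's clock runs 0.25 s behind the reference, and the
# common injection instant is t = 100.0 s on the reference clock.
t_local = local_trigger_time(100.0, local_now=50.0, reference_now=50.25)
print(t_local)  # -> 99.75
```

If every device performs this conversion against the same reference, their injections coincide on the reference timeline, which is what lets the networked information flow through the secondary system be checked as a whole.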