Database Administrator

The Experts below are selected from a list of 8,055 Experts worldwide, ranked by the ideXlab platform.

B. Eaglestone - One of the best experts on this subject based on the ideXlab platform.

  • An integrity constraint for Database systems containing embedded static neural networks
    International Journal of Intelligent Systems, 2001
    Co-Authors: I. Millns, B. Eaglestone
    Abstract:

    Static neural networks are used in some Database systems to classify objects, but like traditional statistical classifiers they often misclassify. For some applications, it is necessary to bound the proportion of misclassified objects. This is clearly an integrity problem. We describe a new integrity constraint for Database systems with embedded static neural networks, with which a Database Administrator can enforce a bound on the proportion of misclassifications in a class. The approach is based upon mapping probabilities generated by a probabilistic neural network to the likely percentage of misclassifications.
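
    To make the constraint concrete, here is a minimal Python sketch of one way such a bound could be checked at insertion time, assuming the classifier reports a posterior probability for each object it assigns to the class: the expected misclassification rate of the class is estimated as the average of (1 - posterior) over its members, and an insertion is rejected if it would push that estimate past the bound chosen by the Database Administrator. The names and the bookkeeping are illustrative; this is not the paper's exact formulation.

    # Minimal sketch (not the paper's exact formulation): an insert-time check that
    # uses the posterior probability reported by a probabilistic classifier to bound
    # the expected share of misclassified objects in a class. All names are illustrative.

    class MisclassificationBound:
        def __init__(self, max_error_rate):
            self.max_error_rate = max_error_rate   # DBA-chosen bound, e.g. 0.05
            self.expected_errors = 0.0             # running sum of (1 - posterior)
            self.size = 0                          # number of objects stored in the class

        def admits(self, posterior):
            """Would inserting an object with this posterior keep the bound?"""
            projected = (self.expected_errors + (1.0 - posterior)) / (self.size + 1)
            return projected <= self.max_error_rate

        def insert(self, posterior):
            if not self.admits(posterior):
                return False                       # constraint violated: reject the insertion
            self.expected_errors += 1.0 - posterior
            self.size += 1
            return True

    constraint = MisclassificationBound(max_error_rate=0.05)
    print(constraint.insert(0.99))   # True: high-confidence classification accepted
    print(constraint.insert(0.60))   # False: would raise the expected error rate above 5%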

  • An integrity constraint for Database systems containing embedded neural networks
    Proceedings Ninth International Workshop on Database and Expert Systems Applications (Cat. No.98EX130), 1998
    Co-Authors: I. Millns, B. Eaglestone
    Abstract:

    Neural networks are used in some Database systems to classify objects, but like traditional statistical classifiers they often misclassify. For some applications, it is necessary to bound the proportion of misclassified objects. This is clearly an integrity problem. We describe a new integrity constraint for Database systems with embedded neural networks, with which a Database Administrator can enforce a bound on the proportion of misclassifications in a class. The approach is based upon mapping probabilities generated by a probabilistic neural network to the likely percentage of misclassifications.

Cynthia Dwork - One of the best experts on this subject based on the ideXlab platform.

  • Our data, ourselves: privacy via distributed noise generation
    Lecture Notes in Computer Science, 2006
    Co-Authors: Cynthia Dwork, Krishnaram Kenthapadi, Frank Mcsherry, Ilya Mironov, Moni Naor
    Abstract:

    In this work we provide efficient distributed protocols for generating shares of random noise, secure against malicious participants. The purpose of the noise generation is to create a distributed implementation of the privacy-preserving statistical Databases described in recent papers [14,4,13]. In these Databases, privacy is obtained by perturbing the true answer to a Database query by the addition of a small amount of Gaussian or exponentially distributed random noise. The computational power of even a simple form of these Databases, when the query is just of the form Σ_i f(d_i), that is, the sum over all rows i in the Database of a function f applied to the data in row i, has been demonstrated in [4]. A distributed implementation eliminates the need for a trusted Database Administrator. The results for noise generation are of independent interest. The generation of Gaussian noise introduces a technique for distributing shares of many unbiased coins with fewer executions of verifiable secret sharing than would be needed using previous approaches (reduced by a factor of n). The generation of exponentially distributed noise uses two shallow circuits: one for generating many arbitrarily but identically biased coins at an amortized cost of two unbiased random bits apiece, independent of the bias, and the other to combine bits of appropriate biases to obtain an exponential distribution.
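
    As a point of reference for the kind of query being perturbed, the Python sketch below computes a sum of the form Σ_i f(d_i) and releases it with Gaussian or exponentially distributed noise added by a single trusted party. The distributed, verifiable generation of that noise is the contribution of the paper and is not reproduced here; the function names, noise scale, and example data are illustrative.

    # Centralized sketch of a perturbed sum query; the paper's protocols generate the
    # noise in a distributed fashion instead of trusting one party to add it.

    import random

    def noisy_sum(rows, f, scale, noise="gaussian"):
        true_answer = sum(f(row) for row in rows)
        if noise == "gaussian":
            perturbation = random.gauss(0.0, scale)
        else:  # two-sided exponentially distributed noise
            perturbation = random.expovariate(1.0 / scale) * random.choice((-1.0, 1.0))
        return true_answer + perturbation

    # Example: a noisy count of rows satisfying a predicate, released with noise of scale 2.
    rows = [{"age": 34}, {"age": 51}, {"age": 29}, {"age": 62}]
    print(noisy_sum(rows, lambda r: 1 if r["age"] >= 40 else 0, scale=2.0))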

  • Privacy-preserving datamining on vertically partitioned Databases
    International Cryptology Conference, 2004
    Co-Authors: Cynthia Dwork, Kobbi Nissim
    Abstract:

    In a recent paper Dinur and Nissim considered a statistical Database in which a trusted Database Administrator monitors queries and introduces noise to the responses with the goal of maintaining data privacy [5]. Under a rigorous definition of breach of privacy, Dinur and Nissim proved that unless the total number of queries is sub-linear in the size of the Database, a substantial amount of noise is required to avoid a breach, rendering the Database almost useless.

  • Privacy-preserving datamining on vertically partitioned Databases
    Lecture Notes in Computer Science, 2004
    Co-Authors: Cynthia Dwork, Kobbi Nissim
    Abstract:

    In a recent paper Dinur and Nissim considered a statistical Database in which a trusted Database Administrator monitors queries and introduces noise to the responses with the goal of maintaining data privacy [5]. Under a rigorous definition of breach of privacy, Dinur and Nissim proved that unless the total number of queries is sub-linear in the size of the Database, a substantial amount of noise is required to avoid a breach, rendering the Database almost useless. As Databases grow increasingly large, the possibility of being able to query only a sub-linear number of times becomes realistic. We further investigate this situation, generalizing the previous work in two important directions: multi-attribute Databases (previous work dealt only with single-attribute Databases) and vertically partitioned Databases, in which different subsets of attributes are stored in different Databases. In addition, we show how to use our techniques for datamining on published noisy statistics.
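
    The following Python sketch illustrates only the setting described above, not the authors' techniques: a trusted curator answers subset-count queries over a single binary attribute, perturbs each answer with bounded noise, and enforces a sub-linear (here, roughly sqrt(n)) query budget after which no further queries are answered. All names, the noise magnitude, and the budget are illustrative.

    # Toy model of a noise-adding curator with a sub-linear query budget; this is a
    # stand-in for the setting studied in the paper, not its algorithms.

    import math
    import random

    class NoisyCurator:
        def __init__(self, bits, noise_magnitude):
            self.bits = bits                               # the private 0/1 attribute, one entry per row
            self.noise_magnitude = noise_magnitude
            self.remaining = int(math.isqrt(len(bits)))    # sub-linear budget: about sqrt(n) queries

        def query(self, row_subset):
            if self.remaining <= 0:
                raise RuntimeError("query budget exhausted")
            self.remaining -= 1
            true_count = sum(self.bits[i] for i in row_subset)
            return true_count + random.uniform(-self.noise_magnitude, self.noise_magnitude)

    curator = NoisyCurator(bits=[1, 0, 1, 1, 0, 0, 1, 0, 1], noise_magnitude=2.0)
    print(curator.query({0, 2, 3, 8}))   # noisy count of ones among the selected rows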

Kobbi Nissim - One of the best experts on this subject based on the ideXlab platform.

  • Privacy-preserving datamining on vertically partitioned Databases
    International Cryptology Conference, 2004
    Co-Authors: Cynthia Dwork, Kobbi Nissim
    Abstract:

    In a recent paper Dinur and Nissim considered a statistical Database in which a trusted Database Administrator monitors queries and introduces noise to the responses with the goal of maintaining data privacy [5]. Under a rigorous definition of breach of privacy, Dinur and Nissim proved that unless the total number of queries is sub-linear in the size of the Database, a substantial amount of noise is required to avoid a breach, rendering the Database almost useless.

  • Privacy-preserving datamining on vertically partitioned Databases
    Lecture Notes in Computer Science, 2004
    Co-Authors: Cynthia Dwork, Kobbi Nissim
    Abstract:

    In a recent paper Dinur and Nissim considered a statistical Database in which a trusted Database Administrator monitors queries and introduces noise to the responses with the goal of maintaining data privacy [5]. Under a rigorous definition of breach of privacy, Dinur and Nissim proved that unless the total number of queries is sub-linear in the size of the Database, a substantial amount of noise is required to avoid a breach, rendering the Database almost useless. As Databases grow increasingly large, the possibility of being able to query only a sub-linear number of times becomes realistic. We further investigate this situation, generalizing the previous work in two important directions: multi-attribute Databases (previous work dealt only with single-attribute Databases) and vertically partitioned Databases, in which different subsets of attributes are stored in different Databases. In addition, we show how to use our techniques for datamining on published noisy statistics.

I. Millns - One of the best experts on this subject based on the ideXlab platform.

  • An integrity constraint for Database systems containing embedded static neural networks
    International Journal of Intelligent Systems, 2001
    Co-Authors: I. Millns, B. Eaglestone
    Abstract:

    Static neural networks are used in some Database systems to classify objects, but like traditional statistical classifiers they often misclassify. For some applications, it is necessary to bound the proportion of misclassified objects. This is clearly an integrity problem. We describe a new integrity constraint for Database systems with embedded static neural networks, with which a Database Administrator can enforce a bound on the proportion of misclassifications in a class. The approach is based upon mapping probabilities generated by a probabilistic neural network to the likely percentage of misclassifications.

  • An integrity constraint for Database systems containing embedded neural networks
    Proceedings Ninth International Workshop on Database and Expert Systems Applications (Cat. No.98EX130), 1998
    Co-Authors: I. Millns, B. Eaglestone
    Abstract:

    Neural networks are used in some Database systems to classify objects, but like traditional statistical classifiers they often misclassify. For some applications, it is necessary to bound the proportion of misclassified objects. This is clearly an integrity problem. We describe a new integrity constraint for Database systems with embedded neural networks, with which a Database Administrator can enforce a bound on the proportion of misclassifications in a class. The approach is based upon mapping probabilities generated by a probabilistic neural network to the likely percentage of misclassifications.

Carlo Zaniolo - One of the best experts on this subject based on the ideXlab platform.

  • Automating Database schema evolution in information system upgrades
    International Workshop on Hot Topics in Software Upgrades, 2009
    Co-Authors: Carlo Curino, Hyun Jin Moon, Carlo Zaniolo
    Abstract:

    The complexity, cost, and downtime currently created by the Database schema evolution process are the source of incessant problems in the life of information systems and a major stumbling block that prevents graceful upgrades. Furthermore, our studies show that the serious problems encountered by traditional information systems are further exacerbated in web information systems and cooperative scientific Databases, where the frequency of schema changes has increased while tolerance for downtime has nearly disappeared. The PRISM project seeks to develop the methods and tools that turn this error-prone and time-consuming process into one that is controllable, predictable, and avoids downtime. Toward this goal, we have assembled a large testbed of schema evolution histories and developed a language of Schema Modification Operators (SMO) to express these histories concisely. Using this language, the Database Administrator can specify new schema changes and then rely on PRISM to (i) predict the effect of these changes on current applications, (ii) translate old queries and updates to work on the new schema version, (iii) perform data migration, and (iv) generate full documentation of the changes made. Furthermore, PRISM achieves good usability and scalability by incorporating recent advances in mapping composition and invertibility in the implementation of (ii). The progress in automating schema evolution so achieved provides the enabling technology for other advances, such as lightweight Database design methodologies that embrace change as the regular state of software. While these topics remain largely unexplored, and thus provide rich opportunities for future research, an important area we have investigated is that of archival information systems, where PRISM query mapping techniques were used to support flashback and historical queries over Database archives under schema evolution.
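
    To give a feel for what an operator history looks like, the Python sketch below replays a few schema changes, expressed as toy (CREATE TABLE, DECOMPOSE TABLE, RENAME COLUMN) tuples, against an in-memory schema. PRISM defines its own SMO syntax and tooling; this encoding is an illustrative stand-in, not its actual language or API.

    # Toy encoding of a short Schema Modification Operator (SMO) history and its
    # forward application to a schema held in memory. Illustrative only.

    schema = {"employee": ["id", "name", "dept", "salary"]}

    history = [
        ("CREATE TABLE", "department", ["dept", "manager"]),
        ("DECOMPOSE TABLE", "employee", {"employee": ["id", "name", "dept"],
                                         "payroll": ["id", "salary"]}),
        ("RENAME COLUMN", "employee", ("dept", "dept_id")),
    ]

    def apply_smo(schema, smo):
        kind = smo[0]
        if kind == "CREATE TABLE":
            _, table, columns = smo
            schema[table] = list(columns)
        elif kind == "DECOMPOSE TABLE":
            _, table, parts = smo
            del schema[table]
            schema.update({t: list(cols) for t, cols in parts.items()})
        elif kind == "RENAME COLUMN":
            _, table, (old, new) = smo
            schema[table] = [new if c == old else c for c in schema[table]]
        return schema

    for smo in history:
        schema = apply_smo(schema, smo)
    print(schema)   # the schema after replaying the evolution history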

  • Managing the history of metadata in support for DB archiving and schema evolution
    International Conference on Conceptual Modeling, 2008
    Co-Authors: Carlo Curino, Hyun Jin Moon, Carlo Zaniolo
    Abstract:

    Modern information systems, and web information systems in particular, face frequent Database schema changes, which create the need to manage them and to preserve the schema evolution history. In this paper, we describe the Panta Rhei Framework, designed to provide powerful tools that: (i) facilitate schema evolution and guide the Database Administrator in planning and evaluating changes, (ii) support automatic rewriting of legacy queries against the current schema version, (iii) enable efficient archiving of the histories of data and metadata, and (iv) support complex temporal queries over such histories. We then introduce the Historical Metadata Manager (HMM), a tool designed to facilitate the process of documenting and querying the schema evolution itself. We use the schema history of the Wikipedia Database as a telling example of the many uses and benefits of HMM.
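
    As a hypothetical illustration of the kind of temporal metadata query such a tool supports (the table, column names, and dates below are invented for the example, and this is not HMM's actual interface), the Python sketch stores column definitions with validity intervals and reconstructs the schema of a table as of a past date.

    # Hypothetical sketch: schema metadata with validity intervals and an "as of" query.

    from datetime import date

    # (table, column, valid_from, valid_to); valid_to=None means still current
    metadata_history = [
        ("page", "page_id",            date(2003, 1, 1),  None),
        ("page", "page_title",         date(2003, 1, 1),  None),
        ("page", "page_counter",       date(2003, 1, 1),  date(2016, 8, 1)),
        ("page", "page_content_model", date(2012, 10, 1), None),
    ]

    def schema_as_of(history, table, when):
        """Columns of the given table that were valid at the given date."""
        return [col for (tab, col, start, end) in history
                if tab == table and start <= when and (end is None or when < end)]

    print(schema_as_of(metadata_history, "page", date(2010, 6, 1)))
    # ['page_id', 'page_title', 'page_counter']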