Future Assertion Time


The Experts below are selected from a list of 6 Experts worldwide, ranked by the ideXlab platform.

Tom Johnston - One of the best experts on this subject based on the ideXlab platform.

  • Deferred Assertions and Other Pipeline Datasets
    Managing Time in Relational Databases, 2020
    Co-Authors: Tom Johnston, Randall Weis
    Abstract:

    This chapter discusses pipeline datasets in general, and one kind of pipeline dataset, deferred Assertions, in particular. It begins by noting that deferred Assertions represent past, present, and Future versions in Future Assertion Time, but that past, present, and Future versions also exist in past and present Assertion Time. This gives nine categories of temporal data, one of which, currently asserted current versions of things, is known as conventional data and is physically located in production tables. The other eight categories correspond to pipeline datasets: data that has those production tables as either its destination or its origin. Deferred Assertions are the result of applying deferred transactions to the database. Instead of holding on to maintenance transactions until it is the right Time to apply them, Asserted Versioning applies them right away, but does not immediately assert them. These deferred Assertions may themselves be updated or deleted. Just as deferred Assertions replace collections of transactions that have not yet been applied to the database, bitemporal data in any of the other seven categories replaces other physically external datasets. Asserted version tables contain data in all these temporal categories and, in doing so, internalize what would otherwise be physically distinct datasets.
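    The nine-way classification described above (past, present, or Future Assertion Time crossed with past, present, or Future version time) can be sketched in code. The column names and half-open date periods below are illustrative assumptions, not the book's actual schema:

    ```python
    from datetime import date

    def period_status(begin, end, today):
        """Classify a half-open period [begin, end) relative to today."""
        if end <= today:
            return "past"
        if begin > today:
            return "future"
        return "present"

    def temporal_category(row, today):
        """Place a bitemporal row into one of the nine categories:
        assertion-time status x version-time status.

        Only ("present", "present") is conventional data, living in
        production tables; the other eight combinations are pipeline
        datasets, with those tables as destination or origin."""
        a = period_status(row["assert_begin"], row["assert_end"], today)
        v = period_status(row["version_begin"], row["version_end"], today)
        return (a, v)

    # A deferred Assertion: asserted only from 2030 onward, describing
    # a version that is current today.
    row = {
        "assert_begin":  date(2030, 1, 1),
        "assert_end":    date(9999, 12, 31),
        "version_begin": date(2024, 1, 1),
        "version_end":   date(9999, 12, 31),
    }
    print(temporal_category(row, date(2024, 6, 1)))  # ('future', 'present')
    ```

    A row whose assertion period and version period both contain today would classify as ('present', 'present'), i.e. conventional data.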

  • Future Assertion Time
    Bitemporal Data: Theory and Practice, 2014
    Co-Authors: Tom Johnston
    Abstract:

    This chapter introduces the concept of Future Assertion Time. It explains the semantics of this concept, and responds to the objection that there is not and cannot be any such thing as Future Assertion Time. It shows how the implementation of standard Assertion Time can be extended to include the management of Future Assertion Time. It introduces the concept of temporal locking, and shows that without it, a situation parallel to the paradoxes of Time travel can occur in databases.
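    The idea of temporal locking can be sketched as a guard on updates. The function and field names here are hypothetical, intended only to illustrate the rule that a deferred Assertion is freely editable until its assertion period begins, after which rewriting it would create the database analogue of a time-travel paradox:

    ```python
    from datetime import date

    class TemporalLockError(Exception):
        """Raised when a transaction tries to rewrite already-asserted data."""

    def update_deferred_assertion(row, new_values, today):
        """Hypothetical temporal-locking guard.

        A deferred Assertion, whose assertion period has not yet begun,
        may still be updated or deleted in place. Once its
        assertion-begin date arrives, the row has been asserted: it may
        only be withdrawn by ending its assertion period, never edited,
        or the database would claim to have asserted something it
        never did.
        """
        if row["assert_begin"] <= today:
            raise TemporalLockError(
                "row is already asserted; withdraw it rather than editing it")
        row.update(new_values)
        return row

    # A deferred Assertion scheduled to become asserted in 2030 can
    # still be corrected today:
    deferred = {"assert_begin": date(2030, 1, 1), "amount": 100}
    update_deferred_assertion(deferred, {"amount": 150}, date(2024, 6, 1))
    ```

    Attempting the same update on a row whose assertion period has already begun raises `TemporalLockError`, closing off the retroactive rewrite that the chapter's Time-travel paradox warns against.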

Randall Weis - One of the best experts on this subject based on the ideXlab platform.

  • Deferred Assertions and Other Pipeline Datasets
    Managing Time in Relational Databases, 2020
    Co-Authors: Tom Johnston, Randall Weis