The experts below are selected from a list of 21 experts worldwide, ranked by the ideXlab platform.
Hilal Maraditkremers - One of the best experts on this subject based on the ideXlab platform.
-
Use of Natural Language Processing Algorithms to Identify Common Data Elements in Operative Notes for Total Hip Arthroplasty
Journal of Bone and Joint Surgery, American Volume, 2019. Co-Authors: Cody C Wyles, Meagan E Tibbo, Yanshan Wang, Sunghwan Sohn, Walter K Kremers, Daniel J Berry, David G Lewallen, Hilal Maraditkremers. Abstract: Update: This article was updated on December 6, 2019, because of a previous error. On page 1936, in Table VII, “Performance of the Bearing Surface Algorithm,” the row that had read “Bearing surface predicted by algorithm” now reads “Bearing surface predicted by algorithm*.” An erratum has been published.
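The abstract above is only an erratum notice, so the extraction method itself is not described here. As a purely illustrative, hedged sketch, a rule-based approach to pulling one common data element (the bearing surface) out of free-text operative notes could look like the following; the keyword patterns, labels, and function names are hypothetical and are not taken from the paper.

```python
import re

# Hypothetical keyword patterns for bearing-surface couples in THA operative notes;
# the vocabulary and rules used in the actual study are not described in this entry.
BEARING_PATTERNS = {
    "ceramic-on-polyethylene": re.compile(
        r"ceramic\s+(?:femoral\s+)?head.*polyethylene\s+liner", re.I | re.S),
    "metal-on-polyethylene": re.compile(
        r"(?:cobalt.chrome|metal)\s+(?:femoral\s+)?head.*polyethylene\s+liner", re.I | re.S),
    "ceramic-on-ceramic": re.compile(
        r"ceramic\s+(?:femoral\s+)?head.*ceramic\s+liner", re.I | re.S),
}

def predict_bearing_surface(note_text: str) -> str:
    """Return the first bearing-surface couple whose pattern matches the note."""
    for label, pattern in BEARING_PATTERNS.items():
        if pattern.search(note_text):
            return label
    return "unknown"

print(predict_bearing_surface(
    "A 32-mm ceramic head was impacted onto the stem and a highly "
    "cross-linked polyethylene liner was seated in the acetabular shell."
))  # -> ceramic-on-polyethylene
```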
Huang Chien-lin - One of the best experts on this subject based on the ideXlab platform.
-
Design of IEEE 802.11n Multi-Rate LDPC Code Decoder
Department of Electrical Engineering (電機工程學系所), 2014. Co-Authors: Huang Chien-lin. Abstract: Low-Density Parity-Check (LDPC) codes are among the best error-correcting codes and enable future generations of wireless devices to achieve higher data rates. This thesis presents a high-throughput, highly parallel, flexible, and scalable hardware implementation of an irregular LDPC decoding system that supports all twelve combinations of the block lengths 648, 1296, and 1944 bits and the coding rates 1/2, 2/3, 3/4, and 5/6 specified by the IEEE 802.11n standard. The proposed LDPC decoder is a parameterized IP core whose input/output data width can be changed with a single parameter; it runs the well-known TDMP and SMSA decoding algorithms. The decoder is pipelined and uses a proposed matrix-reordering technique to rearrange the sequence of its operations and minimize iteration latency, while a proposed switching-activity-reduction algorithm lowers the dynamic switching activity of active nodes to reduce the decoder's power consumption; refreshing the layered nodes periodically limits the resulting coding-gain degradation. The design further improves the pipeline structure and parallel computation units and needs no memory units to store intermediate data, so six codewords can be decoded at the same time using only one routing network to transfer data. The prototype architecture has been implemented in several VLSI technologies (TSMC 0.18 um and UMC 90 nm) and tested on a Xilinx Virtex-5 (XC5VLX330) FPGA. Compared with recent state-of-the-art architectures, the proposed multi-rate LDPC decoder (1) fully supports the IEEE 802.11n specification (20/40 MHz), (2) requires only about 66% of the normalized area on average, and (3) reduces the normalized energy by about 26%.
Table of contents (page numbers omitted):
  List of Tables; List of Figures
  Chapter 1 Introduction: 1.1 Motivation and Contribution; 1.2 Organization of This Thesis
  Chapter 2 Low-Density Parity-Check Code: 2.1 Fundamental Concept of LDPC Code (2.1.1 Message Passing Algorithm; 2.1.2 Code Construction: Gallager's Method [12], MacKay's Method [13], Construction by Quasi-Cyclic Code [14]; 2.1.3 Encoding: Conventional Method, Richardson's Method [11], Quasi-Cyclic Code [14]; 2.1.4 Decoding); 2.2 LDPC Code for IEEE 802.11n [16] (2.2.1 LDPC Coding Rate and Codeword Block Length; 2.2.2 LDPC Encoder; 2.2.3 Parity Check Matrices; 2.2.4 LDPC PPDU Encoding Process)
  Chapter 3 Proposed Algorithm for Better Hardware Implementation: 3.1 Decoding Algorithm Evolution (3.1.1 Sum-Product Algorithm (SPA); 3.1.2 Log-Likelihood Ratio Sum-Product Algorithm (LLR-SPA); 3.1.3 Min-Sum Algorithm (MSA) and Scaling Min-Sum Algorithm (SMSA)); 3.2 Turbo Decoding Message Passing (TDMP) vs. Two-Phase Message Passing (TPMP); 3.3 Proposed Reducing Switch Activity for Decoding Algorithm; 3.5 Floating-Point Simulation Results (simulation environment; performance comparisons of LLR-SPA, MSA, and SMSA, of TPMP and TDMP, and of different reducing-switch-activity algorithms; IEEE 802.11n LDPC code performance with the proposed decoding algorithm)
  Chapter 4 Architecture Design and Circuit Implementation: 4.1 The Specification of the IEEE 802.11n LDPC Code Decoder; 4.2 Fixed-Point Simulation Results; 4.3 Decoder Design (4.3.1 Timing Scheduling: reordering the H matrix, early termination strategy, timing diagram; 4.3.2 Architecture Overview: Router Unit (RU), Check Node Unit (CNU), Variable Node Unit (VNU), LLR Calculation Unit (LLR CU)); 4.3 Hardware Implementation
  Chapter 5 Conclusion and Future Work: 5.1 Conclusion; 5.2 Future Work
  References
  Appendix A Rate Dependent Parameters for HT Modulation and Coding Schemes (MCS); Appendix B HT LDPC Matrix Definitions; Appendix C The Reordered H Matrix
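As a point of reference for the decoding algorithms named above, here is a minimal, hedged sketch of the scaled min-sum (SMSA) check-node update that a layered (TDMP) decoder evaluates for each parity-check row. The scaling factor alpha = 0.75 is a commonly used value and is an assumption here, not a figure taken from the thesis, and the function is a software illustration rather than the thesis's hardware architecture.

```python
import numpy as np

def smsa_check_node(llrs: np.ndarray, alpha: float = 0.75) -> np.ndarray:
    """Scaled Min-Sum check-node update.

    For each edge, the outgoing message magnitude is the minimum |LLR| over
    all *other* incoming edges, scaled by alpha, and its sign is the product
    of the signs of the other incoming messages.
    """
    signs = np.sign(llrs)
    signs[signs == 0] = 1.0
    mags = np.abs(llrs)

    total_sign = np.prod(signs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]  # smallest and second-smallest magnitude

    out = np.empty_like(llrs, dtype=float)
    for i in range(len(llrs)):
        # Exclude edge i: if it carries the overall minimum, use the second minimum.
        other_min = min2 if i == order[0] else min1
        out[i] = alpha * (total_sign * signs[i]) * other_min
    return out

# Example: four incoming variable-to-check messages (LLRs) for one check node
print(smsa_check_node(np.array([-1.2, 0.4, 2.5, -0.9])))
```

In a layered schedule, this update is applied row by row and the variable-node LLRs are refreshed after each layer, which is what allows the switching activity of nodes to be tracked and reduced between layers.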
Cody C Wyles - One of the best experts on this subject based on the ideXlab platform.
-
Use of Natural Language Processing Algorithms to Identify Common Data Elements in Operative Notes for Total Hip Arthroplasty
Journal of Bone and Joint Surgery, American Volume, 2019. Co-Authors: Cody C Wyles, Meagan E Tibbo, Yanshan Wang, Sunghwan Sohn, Walter K Kremers, Daniel J Berry, David G Lewallen, Hilal Maraditkremers. Abstract: Update: This article was updated on December 6, 2019, because of a previous error. On page 1936, in Table VII, “Performance of the Bearing Surface Algorithm,” the row that had read “Bearing surface predicted by algorithm” now reads “Bearing surface predicted by algorithm*.” An erratum has been published.
Gupta, Manish K - One of the best experts on this subject based on the ideXlab platform.
-
On Conflict Free DNA Codes
2019. Co-Authors: Benerjee, Krishna Gopal; Deb, Sourav; Gupta, Manish K. Abstract: DNA storage has emerged as an important area of research. The reliability of a DNA storage system depends on designing DNA strings (called DNA codes) that are sufficiently dissimilar. In this work, we introduce DNA codes that satisfy a special constraint: each codeword of the DNA code has the property that no two consecutive sub-strings of the codeword are the same (a generalization of the homopolymer constraint). This is in addition to the usual constraints such as Hamming, reverse, reverse-complement, and GC-content constraints. We believe that the new constraint will further help to reduce errors when reading and writing data into synthetic DNA strings. We also present a construction (based on a variant of a stochastic local search algorithm) to calculate the size of DNA codes satisfying all the above constraints, which improves the lower bounds from the existing literature for some specific cases. Moreover, a recursive isometric map between binary vectors and DNA strings is proposed. Using this map and well-known binary codes, we obtain a few classes of DNA codes with all the constraints, including the property that the constructed DNA codewords are free from hairpin-like secondary structures. Comment: 12 pages, draft (Table VI and Table VII are updated).
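As a hedged illustration of the constraints described above, the sketch below checks GC-content, Hamming distance, reverse complementation, and the "no two identical consecutive sub-strings" property as worded in the abstract; the formal definitions, parameters, and the actual construction algorithm in the paper may differ.

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def gc_content(s: str) -> float:
    """Fraction of G and C bases in the strand."""
    return (s.count("G") + s.count("C")) / len(s)

def hamming(x: str, y: str) -> int:
    """Number of positions at which two equal-length strands differ."""
    return sum(a != b for a, b in zip(x, y))

def reverse_complement(s: str) -> str:
    """Watson-Crick reverse complement of the strand."""
    return "".join(COMPLEMENT[b] for b in reversed(s))

def is_conflict_free(s: str) -> bool:
    """No two adjacent, equal-length sub-strings of s are identical.

    With window length 1 this reduces to forbidding homopolymer runs
    such as 'AA', so it generalizes the homopolymer constraint.
    """
    n = len(s)
    for length in range(1, n // 2 + 1):
        for start in range(0, n - 2 * length + 1):
            if s[start:start + length] == s[start + length:start + 2 * length]:
                return False
    return True

# Toy examples
print(is_conflict_free("ACGT"))    # True: no repeated adjacent blocks
print(is_conflict_free("ACGACG"))  # False: 'ACG' is repeated back-to-back
print(gc_content("ACGT"), hamming("ACGT", "ACCT"), reverse_complement("ACGT"))
```

A code-design procedure would keep only candidate strands passing these checks while also enforcing pairwise Hamming, reverse, and reverse-complement distance thresholds across the whole codebook.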
Aico Van Vuuren - One of the best experts on this subject based on the ideXlab platform.
-
APPENDIX A: ADDITIONAL TABLES TABLE VII
2016. Co-Authors: Sebastian I. Buhai, Miguel A. Portela, Coen N. Teulings, Aico Van Vuuren. Abstract: We provide a web appendix to the paper “Returns to Tenure or Seniority?”. Appendix A reports the residual autocovariances of within-job log wage innovations, which we use to compute the standard deviation of the permanent shocks. In Appendix B, we show how one can estimate β1 and β2 for both the standard Topel specification and the Topel variant with spell fixed effects when the model includes time dummy variables.
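For readers unfamiliar with the notation, a minimal sketch of the kind of log-wage specification the appendix refers to is given below, assuming a Topel-style equation with experience, tenure, time dummies, and an optional spell fixed effect; the exact variable definitions and which coefficients the paper labels β1 and β2 are assumptions, not taken from the appendix itself.

```latex
% Hypothetical log-wage equation for worker i in job spell j at time t,
% with experience x_{it}, tenure s_{ijt}, time dummies d_t, and a spell
% fixed effect \mu_{ij} (dropped in the "standard Topel" specification):
\begin{equation}
  \ln w_{ijt} = \beta_1 x_{it} + \beta_2 s_{ijt} + d_t + \mu_{ij} + \varepsilon_{ijt}
\end{equation}
% As stated in the abstract, the residual autocovariances of the within-job
% innovations \Delta\varepsilon_{ijt} reported in Appendix A are used to
% compute the standard deviation of the permanent wage shocks.
```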