A logically selective delay of a parallel-output shift register can phase-lock the input and output waveforms. The calculated precision of the phase lock depends on the shift register's bit size and on the clock and input frequencies. This basic idea is extended to multistage (or multilevel) operation. For optimum LSI implementation, a technique is described that minimizes the bit size and the delay-selection logic circuitry for a given precision. With this technique, the shift register's bit size and the delay-selection logic are reduced even for high-precision operation, improving operating efficiency. Also discussed is the design tradeoff between level complexity and circuit size. Finally, extension to synchronization applications is considered.
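The basic single-stage idea can be sketched as follows: an N-bit parallel-output shift register offers N candidate delays, spaced one clock period apart, and the delay-selection logic picks the tap that best aligns the output phase with the input waveform. A minimal model, with all names and parameters hypothetical (the paper's actual logic design is not reproduced here):

```python
def best_tap(n_bits, f_clock, f_input, target_phase=0.0):
    """Pick the shift-register tap whose delay best phase-locks the output.

    Tap k of an n_bits parallel-output shift register clocked at f_clock
    delays the signal by k / f_clock seconds. Phase offsets are measured
    in fractions of the input period, so the achievable precision per tap
    step is roughly f_input / f_clock of a cycle, consistent with the
    precision depending on bit size and the clock/input frequencies.
    Returns (tap index, residual phase error in cycles).
    """
    period = 1.0 / f_input

    def phase_error(k):
        # Fractional phase offset of tap k from the target, wrapped
        # to the nearest cycle boundary.
        frac = ((k / f_clock) / period - target_phase) % 1.0
        return min(frac, 1.0 - frac)

    k = min(range(n_bits), key=phase_error)
    return k, phase_error(k)
```

For example, with an 8-bit register clocked at four times the input frequency, a quarter-cycle lock (`target_phase=0.25`) is met exactly by tap 1, since each tap step advances the phase by a quarter of the input period.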