Národní úložiště šedé literatury (National Repository of Grey Literature)
Fault tolerant systems design automation
Lojda, Jakub ; Plíva, Zdeněk (reviewer) ; Steininger, Andreas (reviewer) ; Sekanina, Lukáš (supervisor)
If a digital system is required to maintain a high level of reliability, it must withstand the presence of naturally emerging failures. Many such systems utilize Field Programmable Gate Arrays (FPGAs). One approach to increasing a system's reliability is the insertion of so-called Fault Tolerance (FT) mechanisms. It is, however, a significant challenge to design systems to be FT. This thesis designs and researches an approach capable of automatically transforming an unhardened design into its FT version. The thesis emphasizes the generality of such a process, which allows the methods to be reused among various description formats, languages, and abstraction levels. It describes the proposed method and its main aspects: the source-code modification approaches, the design strategies, and the acceleration of FT parameter measurement. Last but not least, design flows that target the minimization of required measurements are proposed, which significantly accelerates the complete automated design of an FT system. Several cases were experimentally studied during the research presented in this thesis. Multiple circuits described in different languages were targeted with various reliability metrics to cover multiple scenarios. The first steps use a robot controller written in C++ as a target for evaluating the source-code manipulations and the so-called critical-bits representation of an FPGA design. After that, our C++ benchmark circuits were used instead of the robot controller. At first, a strategy based on the Multiple-choice Knapsack Problem (MCKP) was used to automatically select the most suitable hardening from the available hardening schemes (e.g., Triple Modular Redundancy or N-Modular Redundancy). The proposed design strategy found a solution with 18% fewer critical bits while also lowering the design-size overhead compared to the previous approach with static allocation of FT mechanisms. After that, means of FT mechanism insertion were implemented for VHDL. The VHDL benchmarks were also used with the MCKP strategy to find solutions with the best Median Time to Failure (t50). In this case study, area savings of circa 25% were achieved compared to a reference design in which the FT mechanisms were assigned statically and manually. The method allows the user to constrain the available chip area and obtain the result that is optimal with respect to reliability for the given area (under the assumptions specified in the thesis). System recovery was also tested, which further improved the t50 results by 70%. Finally, a comprehensive case was studied on a real circuit, an FPGA reconfiguration controller. This case presents a method of finding a Pareto frontier of optimal designs considering multiple criteria (i.e., power consumption, size, and Mean Time to Failure, MTTF). The method exploits the principles of dynamic partial reconfiguration.
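The abstract above describes selecting one hardening scheme per component by solving a Multiple-choice Knapsack Problem (MCKP) under an area budget. A minimal sketch of that selection idea follows; the component names, area costs, and reliability gains are hypothetical, and this is not the thesis implementation.

```python
# Illustrative MCKP sketch (not the thesis implementation): pick exactly one
# hardening option per component so that total area fits the budget and the
# total reliability gain is maximized. All numbers below are made up.

def solve_mckp(components, area_budget):
    """components: list of option lists, each option = (name, area_cost, gain).
    Returns (best_gain, chosen_option_names)."""
    NEG = float("-inf")
    dp = [(NEG, [])] * (area_budget + 1)   # dp[a] = best (gain, picks) using area a
    dp[0] = (0.0, [])
    for options in components:
        new_dp = [(NEG, [])] * (area_budget + 1)
        for area, (gain, picks) in enumerate(dp):
            if gain == NEG:
                continue
            for name, cost, value in options:
                a = area + cost
                if a <= area_budget and gain + value > new_dp[a][0]:
                    new_dp[a] = (gain + value, picks + [name])
        dp = new_dp
    return max(dp, key=lambda x: x[0])

# Hypothetical components; each offers "no hardening", TMR, or a cheaper scheme.
components = [
    [("cpu:none", 0, 0.0), ("cpu:TMR", 30, 0.9), ("cpu:duplex", 18, 0.5)],
    [("uart:none", 0, 0.0), ("uart:TMR", 6, 0.3)],
    [("fifo:none", 0, 0.0), ("fifo:TMR", 12, 0.4), ("fifo:parity", 4, 0.2)],
]
gain, picks = solve_mckp(components, area_budget=40)
print(gain, picks)   # e.g. 1.4 ['cpu:TMR', 'uart:TMR', 'fifo:parity']
```

Including a zero-cost "none" option for every component keeps the problem feasible for any budget, mirroring the idea that a component may be left unhardened when area is scarce.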
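For the multi-criteria part of the final case study (power consumption, size, MTTF), the sketch below shows plain Pareto-frontier filtering over a set of candidate designs; the candidates and their numbers are invented for illustration and do not come from the thesis.

```python
# Illustrative Pareto-frontier filter: power and size are minimized, MTTF is
# maximized. A design is kept if no other design dominates it.

def dominates(a, b):
    """True if design a is no worse than b in all criteria and strictly
    better in at least one."""
    no_worse = (a["power"] <= b["power"] and a["size"] <= b["size"]
                and a["mttf"] >= b["mttf"])
    strictly = (a["power"] < b["power"] or a["size"] < b["size"]
                or a["mttf"] > b["mttf"])
    return no_worse and strictly

def pareto_frontier(designs):
    return [d for d in designs if not any(dominates(o, d) for o in designs)]

# Made-up candidate designs.
candidates = [
    {"name": "unhardened",  "power": 1.0, "size": 100, "mttf": 10},
    {"name": "TMR-all",     "power": 2.9, "size": 310, "mttf": 95},
    {"name": "TMR-partial", "power": 1.8, "size": 180, "mttf": 60},
    {"name": "bad-mix",     "power": 2.0, "size": 200, "mttf": 55},  # dominated
]
for d in pareto_frontier(candidates):
    print(d["name"])   # unhardened, TMR-all, TMR-partial
```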
New Methods for Increasing Efficiency and Speed of Functional Verification
Zachariášová, Marcela ; Dohnal, Jan (reviewer) ; Steininger, Andreas (reviewer) ; Kotásek, Zdeněk (supervisor)
In the development of current hardware systems, e.g., embedded systems or computer hardware, new ways to increase their reliability are intensively investigated. One way to tackle the issue of reliability is to increase the efficiency and speed of the verification processes performed in the early phases of the design cycle. This Ph.D. thesis focuses on the verification approach called functional verification. Several challenges and problems connected with the efficiency and speed of functional verification are identified and reflected in the goals of the thesis. The first goal focuses on reducing the simulation runtime when verifying complex hardware systems; the reason is that the simulation of inherently parallel hardware systems is very slow in comparison to the speed of the real hardware. An optimization technique is proposed that moves the verified system onto an FPGA acceleration board while the rest of the verification environment runs in simulation. By this single move, the simulation overhead can be significantly reduced. The second goal deals with manually written verification environments, which represent a huge bottleneck in verification productivity. Writing them by hand is not reasonable, because almost all verification environments have the same structure: they utilize libraries of basic components from the standard verification methodologies and are only adjusted to the system that is verified. Therefore, the second optimization technique takes the high-level specification of the system and automatically generates a comprehensive verification environment for it. The third goal elaborates on how the completeness of the verification process can be achieved using intelligent automation. Completeness is measured by different coverage metrics, and verification usually ends when a satisfactory level of coverage is achieved. Therefore, the third optimization technique drives the generation of input stimuli in order to activate multiple coverage points in the verified system and to enhance the overall coverage rate. The main optimization tool is a genetic algorithm, which is adapted for functional verification purposes and whose parameters are tuned for this domain. It runs in the background of the verification process, analyses the coverage, and dynamically changes the constraints of the stimuli generator. Constraints are represented by the probabilities with which particular values from the input domain are selected. The fourth goal discusses the reusability of verification stimuli for regression testing and how these stimuli can be further optimized in order to speed up the testing. It is quite common in verification that, until a satisfactory level of coverage is achieved, many redundant stimuli are evaluated, as they are produced by pseudo-random generators. However, when creating optimal regression suites, redundancy is no longer needed and can be removed. At the same time, it is important to retain the same level of coverage in order to check all the key properties of the system. The fourth optimization technique is also based on the genetic algorithm, but it is not integrated into the verification process; it works offline after the verification has ended. It removes the redundancy from the original suite of stimuli quickly and effectively, so the resulting verification runtime of the regression suite is significantly improved.
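The third goal above describes a genetic algorithm that tunes the probability constraints of the stimuli generator based on coverage feedback. The sketch below only illustrates the shape of that loop; the fitness function is a stand-in (a real flow would run the simulation and read back the measured coverage), and all sizes and parameters are assumptions.

```python
# Illustrative GA loop for coverage-driven constraint tuning. Each individual
# is a vector of probability weights that bias the pseudo-random stimuli
# generator. The fitness function is a placeholder for real coverage data.

import random

N_WEIGHTS = 8        # hypothetical number of constrained input-value bins
POP, GENS = 20, 30   # assumed population size and generation count

def random_individual():
    return [random.random() for _ in range(N_WEIGHTS)]

def fitness(weights):
    # Placeholder: reward spreading probability mass across bins, standing in
    # for "more coverage points activated". Replace with simulation coverage.
    total = sum(weights)
    probs = [w / total for w in weights]
    return -sum(p * p for p in probs)   # less negative = more uniform

def crossover(a, b):
    cut = random.randrange(1, N_WEIGHTS)
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1):
    return [random.random() if random.random() < rate else w for w in ind]

population = [random_individual() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)      # best individuals first
    parents = population[: POP // 2]                 # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best constraint weights:", [round(w, 2) for w in best])
```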
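The fourth goal removes redundant stimuli while retaining the achieved coverage. The thesis does this with a genetic algorithm running offline after verification; the sketch below substitutes a simpler greedy set-cover reduction with the same goal, over a made-up stimulus-to-coverage mapping, purely to show the idea of coverage-preserving suite reduction.

```python
# Illustrative regression-suite reduction: keep only the stimuli needed to
# retain the original coverage. Greedy set cover is used here as a simple
# stand-in for the GA described in the abstract; the data is invented.

def reduce_suite(stimuli_coverage):
    """stimuli_coverage: dict of stimulus id -> set of covered coverage points.
    Returns a subset of stimulus ids covering the same points."""
    target = set().union(*stimuli_coverage.values())
    remaining, kept = set(target), []
    candidates = dict(stimuli_coverage)
    while remaining:
        # Pick the stimulus covering the most not-yet-covered points.
        best = max(candidates, key=lambda s: len(candidates[s] & remaining))
        kept.append(best)
        remaining -= candidates.pop(best)
    return kept

suite = {
    "stim_01": {"cp1", "cp2"},
    "stim_02": {"cp2"},           # redundant: cp2 already covered by stim_01
    "stim_03": {"cp3", "cp4"},
    "stim_04": {"cp1", "cp4"},    # redundant once stim_01 and stim_03 are kept
}
print(reduce_suite(suite))        # e.g. ['stim_01', 'stim_03']
```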
