National Repository of Grey Literature
Performance in Software Development Cycle: Regression Benchmarking
Kalibera, Tomáš ; Tůma, Petr (advisor) ; Hauswirth, Matthias (referee) ; Eeckhout, Lieven (referee)
The development cycle of large software inevitably introduces software errors that are hard to find and fix. Automated regular testing (regression testing) is a popular method for reducing the cost of finding and fixing functionality errors, but it neglects software performance. The thesis focuses on performance errors, enabling automated detection of performance changes during software development (regression benchmarking). The key problem investigated is non-determinism in computer systems, which causes performance fluctuations. The problem is addressed by a novel benchmarking methodology based on statistical methods. The methodology is evaluated on the large open-source project Mono, where it has detected daily performance changes since August 2004, and on the open-source CORBA implementations omniORB and TAO. Benchmark automation is a complex task in itself. As suggested by experience with compiling the Arpege/Aladin weather forecast model and implementing the SOFA component model, large systems place distinctive demands on tasks such as automated compilation and execution. Complemented by experience from benchmarking Mono, the thesis proposes an architecture for a generic environment for automated regression benchmarking. The environment is being implemented by master students under the supervision of...
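As a loose illustration of the statistical approach the abstract describes, the following is a minimal sketch, assuming SciPy and Welch's t-test as the change detector: it compares response-time samples from two builds of a benchmark and flags a change only when the difference is statistically significant, so that ordinary run-to-run fluctuation (non-determinism) is not reported as a regression. The function name, sample data, and significance threshold are illustrative assumptions, not the thesis's exact methodology.

# Hedged sketch: flag a performance change between two builds using
# Welch's t-test (an assumed stand-in for the thesis's statistical
# methodology). Sample data and alpha are hypothetical.
from scipy import stats

def detect_change(old_runs, new_runs, alpha=0.01):
    """Return (changed, relative_diff) for two samples of benchmark times."""
    _, p = stats.ttest_ind(old_runs, new_runs, equal_var=False)
    old_mean = sum(old_runs) / len(old_runs)
    new_mean = sum(new_runs) / len(new_runs)
    return p < alpha, (new_mean - old_mean) / old_mean

# Example: times (ms) measured for yesterday's and today's daily build.
changed, diff = detect_change([102.1, 99.8, 101.5, 100.9, 103.0],
                              [108.4, 107.9, 109.2, 108.8, 107.5])
if changed:
    print(f"performance change detected: {diff:+.1%}")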
Improving Accuracy of Software Performance Models on Multicore Platforms with Shared Caches
Babka, Vlastimil ; Tůma, Petr (advisor) ; Eeckhout, Lieven (referee) ; Black-Schaffer, David (referee)
The context of this work is performance models of software systems, which are used to predict the performance of a system in its design phase. For this purpose, performance models capture the explicit interactions of the software components that make up the system and the resource demands of the primitive actions performed by the components. On contemporary hardware platforms, however, software components also interact implicitly by sharing numerous resources, such as processor caches, which influence the performance of the primitive actions. Implicit resource sharing is often omitted from performance models, which harms their prediction accuracy. In this work we introduce two methods for incorporating resource sharing models into performance models. Next, we propose an approximate resource sharing model based on linear regression and a detailed model for predicting the performance impact of cache sharing. The cache model is validated on a real processor, and its design is preceded by extensive experiments investigating the performance effects of cache sharing. In addition, we introduce a method for robust validation of performance models using many automatically generated applications.
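To make the linear-regression idea concrete, here is a minimal sketch, assuming NumPy and entirely synthetic data: an ordinary least-squares model relating a component's slowdown to its own cache demand and the combined cache demand of co-running workloads. The feature choice and numbers are hypothetical; the thesis's actual resource sharing and cache models are more detailed.

# Hedged sketch of an approximate, regression-based resource sharing
# model. All data below is synthetic and for illustration only.
import numpy as np

# Each row: [own cache demand (MB), co-runners' combined cache demand (MB)].
features = np.array([[0.5, 0.0], [0.5, 2.0], [1.0, 1.0],
                     [2.0, 0.5], [2.0, 4.0], [4.0, 2.0]])
slowdown = np.array([1.00, 1.12, 1.08, 1.05, 1.35, 1.28])  # measured ratios

# Fit slowdown ~ b0 + b1*own + b2*shared by ordinary least squares.
X = np.column_stack([np.ones(len(features)), features])
coef, *_ = np.linalg.lstsq(X, slowdown, rcond=None)

# Predict the slowdown of a 1 MB workload co-running with 3 MB of demand.
print(coef @ [1.0, 1.0, 3.0])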
