
<accesscontrol>reviewer,pel</accesscontrol>

Robust Testing Environments (RTE)

Currently, much time is spent setting up system tests, understanding exactly what is being tested, diagnosing errors that occur during tests, and verifying the test results. These tasks are especially difficult for distributed enterprise applications composed of many software components running on a collection of heterogeneous servers or in cloud environments. Load generation that accurately simulates complex user behaviour is a growing challenge when testing large-scale critical systems. For example, as more applications move to cloud-based infrastructures, load generation designed for applications residing on a local area network may be ineffective for robust testing. New load generation techniques are required that take cloud-based infrastructures into account, where applications may be physically distributed across different infrastructures in different geographical locations and where network bandwidth may be unknown or vary in quality between points. This research will develop a set of prototype tools to enable more efficient system testing of large enterprise applications.
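As an illustration only (this is not the project's tooling), the following minimal Python sketch shows the basic shape of such a load generator: it paces HTTP requests at a configurable rate, injects random delays to mimic the variable wide-area latency seen when an application is spread across distant infrastructures, and reports simple success and latency statistics. The target URL, request rate, and latency range are hypothetical placeholders.

<syntaxhighlight lang="python">
# Hypothetical, minimal load-generator sketch; not the project's actual tools.
import random
import time
import urllib.error
import urllib.request

TARGET_URL = "http://localhost:8080/health"  # hypothetical endpoint under test
REQUESTS_PER_SECOND = 5                      # offered load
DURATION_SECONDS = 10                        # length of the test run
LATENCY_RANGE_MS = (20, 250)                 # simulated WAN latency spread

def issue_request(url):
    """Send one GET request; return (success, elapsed_seconds)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            ok = 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        ok = False
    return ok, time.perf_counter() - start

def run_load_test():
    ok_count, fail_count, latencies = 0, 0, []
    deadline = time.monotonic() + DURATION_SECONDS
    while time.monotonic() < deadline:
        # Inject a random delay to mimic requests arriving over links of
        # varying quality, as in geographically distributed deployments.
        time.sleep(random.uniform(*LATENCY_RANGE_MS) / 1000.0)
        ok, elapsed = issue_request(TARGET_URL)
        ok_count += ok
        fail_count += not ok
        latencies.append(elapsed)
        time.sleep(1.0 / REQUESTS_PER_SECOND)  # pace the offered load
    total = ok_count + fail_count
    if total:
        print("sent=%d ok=%d failed=%d mean_latency=%.3fs"
              % (total, ok_count, fail_count, sum(latencies) / len(latencies)))

if __name__ == "__main__":
    run_load_test()
</syntaxhighlight>

A real harness would of course use many concurrent workers and measured (rather than simulated) network conditions; the point here is only the control loop.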

People involved

  • Sandra Theodora Buda - PhD Student, UCD
  • Vanessa Ayala - PhD Student, UCD
  • Seán Ó Ríordáin - PhD Student, TCD
  • Thomas Cerqueus - Postdoc researcher, UCD
  • Patrick McDonagh - Postdoc researcher, DCU
  • Anthony Ventresque - Postdoc researcher, UCD
  • Christina Thorpe - Postdoc researcher, UCD
  • Philip Perry - Postdoc researcher, UCD
  • Simon Wilson - Professor, TCD
  • John Murphy - Professor, UCD
  • Liam Murphy - Professor, UCD

Collaborations

  • IBM Dublin
  • Eduardo Cunha de Almeida - Assistant Professor, Federal University of Paraná (Brazil)
  • Gerson Sunyé - Associate Professor, University of Nantes (France)
  • Yves Le Traon - Professor, University of Luxembourg (Luxembourg)

Statement of work

The following table presents the deliverables for the project.

{| class="wikitable"
|- style="text-align: center;"
! Quarter !! Deliverables !! Status
|-
| Q1 2013 || Survey of anonymization techniques based on k-anonymity for microdata || Done
|-
| Q3 2013 || High-level architecture design of a privacy-preserving data publishing assisting tool <br/> Survey of data mining algorithms || In process
|-
| Q3 2012 || Initial algorithm to automatically extract relationships between schemas in the DB <br/> Survey of algorithms for database sampling || Done
|-
| Q1 2013 || Design and specification of an algorithm for database sampling (table-level) || Done
|-
| Q4 2013 || Evaluation of scalability and performance of the DB sampling algorithm || Done
|-
| Q2 2014 || Evaluation of the DB sampling algorithm on a real test case || In process
|-
| Q3 2014 || Design and specification of an algorithm for database sampling (attribute-level) <br/> Survey of algorithms for database extrapolation || Done
|-
| Q3 2014 || Design and specification of an algorithm for database extrapolation <br/> Survey of stress testing techniques || In process
|-
| Q3 2014 || Design of a stress testing methodology for Cloud applications <br/> Survey of self-tuning/automatic testing techniques || Not started
|-
| Q3 2014 || Design of a self-tuning methodology for Cloud applications || Not started
|-
| Q1 2013 || Review and implementation of previous approach <br/> An analysis of Firefox bug report data ||
|-
| Q4 2013 || Survey of the literature on statistical reliability in large scale software systems ||
|-
| Q3 2012 || Questionnaire: Testing the Cloud || Done
|-
| Q3 2013 || Organisation of the Testing The Cloud workshop (ISSTA 2013) || In process
|}