Expert answer: Effective Quality Control Measure Policy Making an

  

Solved by verified expert: Supply chain and ERP system failures due to ineffective quality measures in policy making

Problem statement: How ineffective resource planning causes quality failures in an organization

Supply chain management is the flow of goods and services to meet demand and supply. Quality and accountability are the two major building blocks for any organization to excel. Quality control improves the profitability ratio of any company, but quality control alone will not lead an organization to success; effective resource planning and managing resources at the right time are equally important considerations in policy making. Our paper focuses on how supply chain management and ERP systems are interlinked in various organizations, how organizations implement quality control in their deliveries, the level of impact on the organization when quality control and resource planning fail, and what measures can be taken to avoid such failures in real time.

References regarding the issues:
https://rd.springer.com/article/10.1007/s10551-011…
https://nbs.net/p/just-do-it-how-nike-turned-a-sup…
https://www.360cloudsolutions.com/resources/top-si…
https://acadpubl.eu/hub/2018-120-5/4/355.pdf
https://astro.temple.edu/~wurban/Case%20Studies/HP…

Question: I need you to work on the topics below, with 350 words for each topic:
- Effective quality control measures
- Effective policy making or decision making

I need PPT slides as well. The documents attached below are for the "effective quality control measures" topic (they should be peer-reviewed articles). I can send you more reference documents for the "effective policy making or decision making" topic shortly. You can use your own references, but they need to be peer-reviewed articles.

Note: I need APA format.
06825665.pdf

06918609.pdf


08358382.pdf

Unformatted Attachment Preview

2014 IEEE International Conference on Software Testing, Verification, and Validation Workshops
A Fault Model Framework for Quality Assurance
Dominik Holling
Technische Universität München, Germany
holling@in.tum.de
Advisor: Prof. Dr. Alexander Pretschner Project: Industry (embedded systems)
Abstract—This Ph.D. thesis proposes a testing methodology based on fault models with an encompassing fault model lifecycle framework. Fault models have superior fault detection ability w.r.t. random testing by capturing what “usually goes wrong”. Turning them operational yields the (semi-)automatic generation of test cases directly targeting the captured faults/failures. Each operationalization requires an initial effort for fault/failure description and creation of a test case generator, which is possibly domain-/test level-/application-specific. To allow planning and controlling this effort in practice, a fault model lifecycle framework is proposed capturing the testing methodology. It allows tailoring itself to processes used in organizations and integration into existing quality assurance activities. The contribution of this Ph.D. thesis is a testing methodology based on fault models to generate test cases and a lifecycle framework for its real-world application in organizations.

Index Terms—fault model; fault-based testing; mutation testing; test case generation; quality assurance

I. PRELIMINARY HYPOTHESIS

A good test case detects a potential, or likely, fault with good cost-effectiveness [2]. Using a testing methodology based on generic fault models enables the description of classes of faults/failures by a higher-order mutation that captures real-world faults and hence does not rely on the coupling hypothesis. By means of operationalization, this mutation is used for test case derivation instead of test case assessment, thereby creating “good” test cases. Since the initial effort for operationalization is high, a fault model lifecycle framework enables planning and controlling the employment of the methodology.

II. INTRODUCTION

One of the fundamental problems in software testing is selecting a finite set of test cases to be executed. In partition-based testing, the input domain of a program is partitioned into blocks and a number of inputs are (randomly) selected from each block [2]. Random testing selects test cases from the input domain in a (uniformly) random way with negligible effort and thereby comprises the fundamental assessment criterion for test selection: any created test selection criterion should be more effective than random testing if a selection effort is required. According to a model created by Weyuker and Jeng more than 20 years ago [3], the effectiveness (i.e., fault detection ability) of partition-based testing can be better, worse, or the same as random testing, depending solely on the partition. They conclude that the efficiency is maximized when the input space partition captures the distribution of faults and test inputs are only selected from failure-causing input blocks.

III. THE PROBLEM

Weyuker and Jeng question whether existing test selection criteria are able to capture this distribution of faults and are worth their selection effort. Good test cases (see Section I) intuitively have superior effectiveness w.r.t. random testing and thereby justify their derivation effort. Thus, the problem to be addressed is: how to define a methodology for the derivation of good test cases and enable its integration into quality assurance in practice.

Assume there is a method to create an input space partition reflecting the distribution of faults. By only examining this partition, all failure-causing inputs are revealed, making any further testing redundant. Since finding such a partition is infeasible, an input space partition based on a hypothesized distribution of faults should be used. The term fault model describes “what typically goes wrong” and has been used without a precise definition throughout testing research. In the literature, it describes (typical) faults/failures in specific areas of testing and is accompanied by adequate detection methods. Precisely defining fault models and turning them operational would enable the (semi-)automatic derivation of good test cases.
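[Editor's illustration, not part of the original paper.] The dependence on the partition that Weyuker and Jeng's model [3] describes can be made concrete with a small simulation: seed a fixed fraction of failure-causing inputs into a toy integer domain, then compare detection rates for uniform random selection, an idealized partition that captures the fault distribution, and an arbitrary equal-size partition. All names and parameters below are assumptions chosen for illustration.

```python
# Illustrative simulation (not from the paper): compares random testing with
# partition-based testing on a toy input domain where a known subset of
# inputs is failure-causing.
import random

DOMAIN_SIZE = 10_000
FAILING_FRACTION = 0.01   # assumed: 1% of inputs trigger the seeded fault
BUDGET = 10               # test inputs each strategy may select
TRIALS = 2_000

def detection_rate(select) -> float:
    """Fraction of trials in which the strategy picks a failing input."""
    hits = 0
    for _ in range(TRIALS):
        failing = set(random.sample(range(DOMAIN_SIZE),
                                    int(DOMAIN_SIZE * FAILING_FRACTION)))
        if any(x in failing for x in select(failing)):
            hits += 1
    return hits / TRIALS

def random_testing(_failing):
    # Uniform random selection: ignores any knowledge of the fault.
    return random.sample(range(DOMAIN_SIZE), BUDGET)

def fault_capturing_partition(failing):
    # Idealized partition capturing the fault distribution: one block holds
    # all failure-causing inputs, and tests are drawn only from that block.
    return random.sample(sorted(failing), min(BUDGET, len(failing)))

def arbitrary_partition(_failing):
    # A partition unrelated to the fault: one input per equal-size block,
    # i.e., plain stratified random selection.
    block = DOMAIN_SIZE // BUDGET
    return [random.randrange(i * block, (i + 1) * block) for i in range(BUDGET)]

if __name__ == "__main__":
    print("random:          ", detection_rate(random_testing))
    print("fault-capturing: ", detection_rate(fault_capturing_partition))
    print("arbitrary blocks:", detection_rate(arbitrary_partition))
```

The fault-capturing partition detects the fault in every trial, while the arbitrary partition performs about as well as random testing, matching the paper's point that the benefit depends solely on the partition.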
IV. PROPOSED RESEARCH APPROACH

The proposed approach consists of a formally defined testing methodology based on generic fault models [2] and a fault model lifecycle framework encapsulating the methodology to enable effort justification and integration into existing quality assurance activities within organizations.

Let a behavior description (BD) be any kind of program, system, requirements, architecture, or problem description. Formally, a generic fault model is (1) a transformation α from correct to incorrect BDs and/or (2) a partition of the input data space ϕ. The transformation α is a higher-order mutation and describes a class of faults. ϕ describes a failure-causing strategy, which is either derived from α or an a priori unknown set of faults causing a described failure. An operationalization takes α and creates ϕ to subsequently derive test cases, or takes ϕ and derives test cases without using α [2]. As an example, consider a program as an instance of a BD. In the first case, the program is searched for all elements y in the co-domain of α, which are the outcome of the mutation. If found, an input space partition is created by ϕ such that all potentially failure-causing inputs executing the line of code of y are in one block and all other inputs are in a different block. Test cases are only selected from the first block, as they target y and represent good test cases. If the creation of such a partition is infeasible, only a code smell is reported. If y is not found, the fault is mutated into the program to verify that the operationalization is able to detect it. In the second case, the input space partition is created by ϕ using an established and predefined failure-causing strategy based on past failures. The layout of the created partition and the test case selection strategy are equivalent to the first case. Adequate tools for operationalization are model checkers, symbolic execution, or abstract interpretation [2], as they are able to create an approximation of the desired input space partition. Thus, the operationalization yields a test case generator with a smell-reporting fallback.
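[Editor's illustration, not part of the original paper.] To ground the α/ϕ formalism, here is a deliberately small, assumed example rather than the thesis's actual generator: the BD is a boundary predicate, α is a higher-order mutation that shifts the boundary by one (a class of off-by-one faults), and ϕ partitions the input space into a block of boundary-adjacent, potentially failure-causing inputs and a block of everything else; tests are selected only from the first block.

```python
# Simplified sketch of operationalizing a fault model (assumed example):
# alpha plants an off-by-one class of faults, phi partitions inputs into a
# potentially failure-causing block versus the rest.
import random

def correct_bd(x: int) -> bool:
    """Behavior description (BD): accept values strictly below a limit."""
    return x < 100

def alpha(bd):
    """Higher-order mutation: weaken a strict comparison (off-by-one class)."""
    return lambda x: x <= 100   # the described class of faults

def phi(domain):
    """Partition the input space: boundary-adjacent inputs, where the fault
    class manifests, form the failure-causing block."""
    failure_block = [x for x in domain if 99 <= x <= 101]
    other_block = [x for x in domain if not (99 <= x <= 101)]
    return failure_block, other_block

def generate_tests(domain, n=3):
    """Select test inputs only from the failure-causing block."""
    failure_block, _ = phi(domain)
    return random.sample(failure_block, min(n, len(failure_block)))

if __name__ == "__main__":
    domain = range(0, 200)
    mutant = alpha(correct_bd)
    for x in generate_tests(domain):
        # A good test case distinguishes the correct BD from the mutant.
        print(x, "kills mutant" if correct_bd(x) != mutant(x) else "survives")
```

Here the input 100 always distinguishes the correct BD from the mutant, so the partition concentrates the budget on the inputs that actually reveal the described fault class.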
In general, the proposed testing methodology is an instance of fault-based testing [1] and aims at showing the absence of the described class of faults or failures. It also provides a stopping criterion, but does not aim to show any properties of the tested program such as correctness.

To enable the creation and operationalization of effective fault models and to justify the involved effort, planning and controlling activities are required. While planning supports decision-making about the expected effectiveness of a fault model before its creation, controlling constantly compares the expected and actual effectiveness after operationalization. The proposed fault model lifecycle framework (see Fig. 1) consists of planning (elicitation (1) and classification (2)), fault model methodology (description (3) and operationalization (4)), and controlling (assessment (5) and maintenance (6)).

Fig. 1. Framework for quality assurance using fault models

During planning, faults are collected (1) and organized (2) w.r.t. the expected fault model effectiveness. In particular, common and recurring faults described as fault models have a high expected effectiveness, such that classifying faults with such a criterion appears reasonable.

During the employment of the methodology in quality assurance, the selected classes of faults/failures are described (3) and an operationalization is created (4). The operationalization yields a test case generator, which can be reused according to its inherent type of BD, domain, test level, and application.

To control the effectiveness, fault models are constantly assessed (5) and maintained (6). The assessment is based on the question whether the fault model describes the intended class of faults and whether the test case generator generates good test cases. Maintenance focuses on whether fault models are still effective and what other faults/failures should possibly be considered for the creation of further fault models.

There exist two feedback loops: one from assessment to classification in case a mismatch was evaluated, and one from maintenance to description as well as elicitation in case fault models need to be adjusted. Tailoring this framework allows its integration into existing organizational quality assurance.
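[Editor's illustration, not part of the original paper.] For an operational view, the six phases and two feedback loops of Fig. 1 can be encoded compactly; the following sketch is our own rendering of the lifecycle's control flow, not tooling from the thesis.

```python
# Illustrative encoding of the fault model lifecycle of Fig. 1 (our own
# sketch): six phases plus the two feedback loops described in the text.
PHASES = ["elicitation", "classification",      # planning
          "description", "operationalization",  # fault model methodology
          "assessment", "maintenance"]          # controlling

NEXT = {p: n for p, n in zip(PHASES, PHASES[1:])}

# Feedback loops: assessment -> classification on a mismatch;
# maintenance -> description or elicitation when models need adjustment.
FEEDBACK = {"assessment": "classification",
            "maintenance": ("description", "elicitation")}

def next_phase(current: str, needs_rework: bool = False):
    """Advance the lifecycle, taking a feedback edge when rework is needed."""
    if needs_rework and current in FEEDBACK:
        return FEEDBACK[current]
    return NEXT.get(current, "elicitation")  # the lifecycle repeats

if __name__ == "__main__":
    print(next_phase("operationalization"))             # -> assessment
    print(next_phase("assessment", needs_rework=True))  # -> classification
```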
V. EXPECTED CONTRIBUTION

The contribution of this Ph.D. thesis is a testing methodology and generation technology based on fault models and an encompassing fault model lifecycle for integration into organizational quality assurance activities. The testing methodology achieves a higher effectiveness than random testing by capturing and operationalizing knowledge of what went wrong in the past as well as results from the literature. To focus efforts on effective fault models, the methodology is encapsulated by a fault model lifecycle framework guiding the introduction, employment, and controlling of fault model methodology in testing as well as other quality assurance techniques (e.g., reviews and inspections).
VI. SUMMARY OF RESULTS TO DATE
In [2], a generic fault model is defined constituting the
basis of the proposed methodology. Furthermore, this thesis is
performed in an industry project where I have already elicited
and described common and recurring faults of the industry
partner. Currently, I am developing a first operationalization
based on these fault models and plan to evaluate it shortly.
VII. EVALUATION
For the evaluation of this Ph.D. thesis, both the effectiveness in terms of fault detection ability and the efficiency in terms of involved effort are to be evaluated. I want to perform a
case study comparing different existing test selection criteria
with the proposed fault model methodology. To evaluate the
effectiveness, I want to analyze how many faults were found
per method grouped by the class of faults. For the effort, I want
to compare the costs of the different techniques w.r.t. time
taken for introduction, finding a fault and controlling. Both
comparisons are to be performed multiple times in different
domains, on different test levels and with different applications
as I hypothesize the results to be essentially different when
changing one of these influencing factors.
VIII. CONCLUSION

The aim of this Ph.D. thesis is to define a testing methodology based on fault models and an encompassing fault model lifecycle framework for practical integration into quality assurance. The fault model hypothesizes about the distribution of faults in a BD. By operationalization, good test cases are (semi-)automatically derived. The lifecycle framework encapsulates the methodology, enabling planning, employment, and controlling of the methodology. Thus, it can be applied in organizational quality assurance along with other techniques.
REFERENCES
[1] L. Morell, “A theory of fault-based testing,” IEEE Transactions on Software Engineering, 1990.
[2] A. Pretschner, D. Holling, R. Eschbach, and M. Gemmar, “A generic fault model for quality assurance,” in Model-Driven Engineering Languages and Systems, 2013.
[3] E. J. Weyuker and B. Jeng, “Analyzing partition testing
strategies,” IEEE Trans. Softw. Eng., 1991.
Quality Assurance Strategy for Distributed Software
Development using Managed Test Lab Model
Anuradha Mathrani
School of Engineering and Advanced Technology
Massey University
Auckland, New Zealand
a.s.mathrani@massey.ac.nz
Abstract—Distributed software development is becoming the
norm as it is considered a more cost effective way of building
software. Organizations seek to reduce development time with
concurrent teams spread across geographical spaces as they
jointly collaborate in designing, building, and testing the evolving
software artefacts. However, the software evolution process is not
an easy task, and requires many iterations of testing for ongoing
verification and validation with some pre-determined design
quality criteria. This is further complicated in distributed teams
which interact over lean virtual platforms. This study
investigates a distributed environment spread across Japan, India, and New Zealand to show how a managed test lab model (MTLM) is used to facilitate quality assurance practices
for testing of the evolving artefacts within the overall software
development lifecycle process. The paper describes the
operational aspects of using MTLM as an online framework in
which responsibilities are assigned, test scripts are executed, and
validation processes are formalized with appropriate use of
toolkits to coordinate allocated task breakdowns across the three
countries.
Keywords—test cases, test scripts, quality, distributed software
development, software evolution
I. INTRODUCTION
Distributed software development involves partnerships by
software teams spread across inter- and intra- organizational
boundaries who work together over virtual communication
platforms. Accordingly, shared project workspaces are created
by management to assist the partnering teams in operations, as
they jointly collaborate online to build evolving software
artefacts. The software evolution process is not an easy task, as
risks often relate to differing choices by teams in setting of
design standards, quality governance issues, or liaison
concerns, amongst others [1]. As software artefacts evolve, the
teams go over many iterative cycles of designing, testing, and
verification before the said artefact can move onto the next
granular stage of evolution. Quality processes in the product
evolution process entail ongoing interaction between the
company management and the developer teams during various
stages of development in the product lifecycle. Often the
interactions go beyond rules, agreements, and exceptions
specified in formal contracts between the partnering groups [2].
Further, in the case of distributed development, the interactions
are over virtual infrastructural arrangements which connect all
the stakeholders. Typically, communications over virtual platforms are leaner and less intuitive as compared to face-to-face office environments, since virtual platforms are intrinsically text-based or at most may utilize VoIP media.
Software development is both a building and validating
process, where various building blocks are integrated into
software modules. These modules evolve with ongoing rework
as verification strategies are applied to uncover any hidden
defects in the current software build. During the testing or
verification stage, the product operations are checked against a
pre-determined criterion. Importantly, this criterion lays out the
quality measures for the said product, which are decided during
the requirement gathering phase by customers and
subsequently, during the design phase by the project
stakeholders. Thus, quality standards are not an afterthought
but are laid out early in the development process or, as and
when the customer identifies new requirements. The
requirements are translated into artefact deliverables and are
interwoven into the whole software lifecycle process. It is
essential that all stakeholders have an unambiguous
representation of the requirements, so that accurate
interpretations are made in capturing and setting of the right
quality standards [3]. This further lays out the effectiveness of
test case/suite development. The role of the testing team is to
validate whether the software build adheres to the laid out
quality specifications, and meets the end user expectations.
Thus, quality standards are not decided by the testing team,
rather the testers analyze the build to provide assurance on
quality and reliability status of the current build by confirming
if the build meets the pre-determined criteria.
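[Editor's illustration; the paper does not prescribe any tooling.] This confirmation step can be pictured as threshold checks that a testing team runs against measured metrics of the current build; the criterion names and thresholds below are assumptions, not taken from the paper.

```python
# Hypothetical sketch: validating a build against pre-determined quality
# criteria agreed during requirements and design (names and thresholds are
# illustrative only).
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    threshold: float
    higher_is_better: bool = True

    def passes(self, measured: float) -> bool:
        if self.higher_is_better:
            return measured >= self.threshold
        return measured <= self.threshold

# Pre-determined criteria, fixed before testing begins.
CRITERIA = [
    Criterion("test pass rate", 0.98),
    Criterion("statement coverage", 0.80),
    Criterion("open severity-1 defects", 0, higher_is_better=False),
]

def validate_build(metrics: dict) -> bool:
    """Return True only if the build meets every pre-determined criterion."""
    ok = True
    for c in CRITERIA:
        measured = metrics[c.name]
        status = "PASS" if c.passes(measured) else "FAIL"
        ok = ok and c.passes(measured)
        print(f"{c.name}: measured={measured} threshold={c.threshold} {status}")
    return ok

if __name__ == "__main__":
    build_metrics = {"test pass rate": 0.99,
                     "statement coverage": 0.83,
                     "open severity-1 defects": 1}
    print("Build accepted" if validate_build(build_metrics) else "Build rejected")
```

The point of the structure is that testers only confirm conformance: the criteria themselves are fixed upstream, exactly as the paper describes.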
Studies that synthesize research and practice on migration
strategies during software evolution process such as testing of
software product lines are limited. Current studies give a brief
indication of quality assurance (QA) and testing strategies,
which need further support or refute through empirical
assessment such as formal experiments and/or case studies [4].
Software evolution process requires an environment or a “test
lab” which has provisions for teams to review, verify, and
validate the evolving artefacts with the agreed standards. The
notion of “test lab” also sometimes referred to as a “sandbox”,
is to assist team members to confirm different aspects of the
artefact’s functionality using a variety of testing techniques
across various domains.
In this paper, case study methods have been employed to
investigate how a shared virtual platform is deployed by
development teams across three countries – Japan, India, and
New Zealand. We investigate how practitioners evaluate
software modules and facilitate quality assurance practices in
distributed software development using a managed test lab
model (MTLM) environment. In this era of new markets
where distributed software development is becoming the norm,
this paper illustrates how MTLM is used as a quality assurance
strategy in the software evolution process.
II. CASE DESCRIPTION
This study investigates the case of a leading software
vendor referred as VNDR (pseudonym), having development
centers in Auckland (New Zealand), Melbourne (Australia),
and Hyderabad (India). They have undertaken many software
projects with clients in Australia, US, Japan, and New Zealand,
where they provide design, development, testing and
maintenance services. This study is specific to a client
organization CLNT (pseudonym) in Japan, who commissioned
VNDR for their QA services for a flagship application, that is,
its online invitation service (OIS). The OIS application offers
users an online system via internet and mobile devices to form
groups and manage events, sound alerts and send invitations
within or outside groups. To facilitate QA activities, VNDR
has set up a sandbox environment across the three countries for
both development and testing teams to build and validate the
evolving artefacts. The sandbox referred by VNDR as
Managed Test Lab Model has been customized with software
tools to allow online interactions between development teams
located in Japan, and testing teams located in India and New
Zealand. Further, the time difference between India (GMT +
5:30), Japan (GMT + 9:00), and New Zealand (GMT + 13:00),
means that all the three teams have a time overlap during the
day, so online communications over MTLM can occur at
reasonable office hours during weekdays.
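[Editor's illustration, not part of the original paper.] The overlap claim can be checked with a few lines of arithmetic. The sketch below assumes a 09:00-17:00 local workday at each site (the paper does not state actual site hours) and uses the GMT offsets quoted above.

```python
# Minimal sketch: compute the shared office-hours window for the three MTLM
# sites from the GMT offsets stated in the paper. The 09:00-17:00 workday
# is an assumed illustration; real site hours may differ.
OFFSETS = {"India": 5.5, "Japan": 9.0, "New Zealand": 13.0}
WORK_START, WORK_END = 9.0, 17.0  # assumed local office hours

def hhmm(hours: float) -> str:
    """Format decimal hours as HH:MM, wrapping around midnight."""
    minutes = round((hours % 24) * 60)
    return f"{minutes // 60:02d}:{minutes % 60:02d}"

def shared_window():
    """Intersect each site's office hours after converting them to UTC."""
    intervals = [(WORK_START - off, WORK_END - off) for off in OFFSETS.values()]
    start = max(s for s, _ in intervals)
    end = min(e for _, e in intervals)
    return (start, end) if start < end else None

if __name__ == "__main__":
    window = shared_window()
    if window is None:
        print("No common office-hours window")
    else:
        start, end = window
        print(f"Shared window: {hhmm(start)}-{hhmm(end)} UTC")
        for site, off in OFFSETS.items():
            print(f"  {site}: {hhmm(start + off)}-{hhmm(end + off)} local")
```

Under these assumed hours, the common window is 03:30-04:00 UTC, i.e., 09:00-09:30 in India, 12:30-13:00 in Japan, and 16:30-17:00 in New Zealand, which is consistent with the paper's observation that a daily overlap exists.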
The MTLM has a steering committee comprising top management from both organizations – CLNT and VNDR – who oversee the whole operations within the MTLM environment. Fig. 1 displays the organizational structure of MTLM. The steering committee defines joint work tasks and responsibilities for the teams, which are conveyed through the project manager, quality manager, and test leaders.

Fig. 1. Steering committee structure in managed test lab model
III. OPERATIONAL FRAMEWORK FOR MANAGED TEST LAB MODEL ENVIRONMENT
Once the steering committee has defined work
responsibilities, expectations, and scope of the QA activities to
CLNT and VNDR teams, the MTLM operations can be
initiated. This is outlined in a formal document called the
responsibility matrix, which defines all resources to be utilized
(i.e., number of test engineers, testing techniques, skill sets),
time frames of work undertaken by client and vendor (i.e.,
completion of testing rounds during various release phases) …