Gini Calculations

  

A separate problem document and the textbook PowerPoint have been attached. The problem document was purposely created in Microsoft Word so you can enter your answers directly into it. The problems focus on the use of the Gini index.

YOUR ANSWERS MUST APPEAR WITHIN THE PROBLEM DOCUMENT.
- 10% will be deducted if you create a new or separate document.
- 10% will be deducted if you create a "title page" type of document.
- 20% will be deducted if you do not show your calculations for each answer.

You must make your own calculations of the Gini index, and you must show your calculations in the answer document. Insufficient calculation steps will result in reduced points earned.
chap3_basic_classification.ppt

chapter_3_problems___gini_calculations.docx

Attachment preview:

Data Mining. Classification: Basic Concepts and Techniques
Lecture Notes for Chapter 3 of Introduction to Data Mining, 2nd Edition
by Tan, Steinbach, Karpatne, Kumar
Classification: Definition

Given a collection of records (a training set):
- Each record is characterized by a tuple (x, y), where x is the attribute set and y is the class label.
  - x: attribute, predictor, independent variable, input
  - y: class, response, dependent variable, output

Task:
- Learn a model that maps each attribute set x into one of the predefined class labels y.
Examples of Classification Task

Task: Categorizing email messages
  Attribute set, x: features extracted from email message header and content
  Class label, y: spam or non-spam

Task: Identifying tumor cells
  Attribute set, x: features extracted from MRI scans
  Class label, y: malignant or benign cells

Task: Cataloging galaxies
  Attribute set, x: features extracted from telescope images
  Class label, y: elliptical, spiral, or irregular-shaped galaxies
General Approach for Building Classification Model

Training Set:
  Tid  Attrib1  Attrib2  Attrib3  Class
  1    Yes      Large    125K     No
  2    No       Medium   100K     No
  3    No       Small    70K      No
  4    Yes      Medium   120K     No
  5    No       Large    95K      Yes
  6    No       Medium   60K      No
  7    Yes      Large    220K     No
  8    No       Small    85K      Yes
  9    No       Medium   75K      No
  10   No       Small    90K      Yes

Test Set:
  Tid  Attrib1  Attrib2  Attrib3  Class
  11   No       Small    55K      ?
  12   Yes      Medium   80K      ?
  13   Yes      Large    110K     ?
  14   No       Small    95K      ?
  15   No       Large    67K      ?

Induction: the training set is fed to a learning algorithm, which learns a model.
Deduction: the model is then applied to the test set to predict each record's class.
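To make this induction/deduction loop concrete, here is a minimal sketch (not part of the slides) that trains a Gini-based decision tree on the training set above and applies it to the test set. It assumes scikit-learn is available; the numeric encoding of the categorical attributes is an arbitrary illustrative choice.

    from sklearn.tree import DecisionTreeClassifier

    # Training set (Tids 1-10), hand-encoded: Attrib1 Yes=1/No=0;
    # Attrib2 Small=0/Medium=1/Large=2; Attrib3 in thousands.
    X_train = [[1, 2, 125], [0, 1, 100], [0, 0, 70], [1, 1, 120], [0, 2, 95],
               [0, 1, 60], [1, 2, 220], [0, 0, 85], [0, 1, 75], [0, 0, 90]]
    y_train = ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"]

    # Test set (Tids 11-15), class labels unknown.
    X_test = [[0, 0, 55], [1, 1, 80], [1, 2, 110], [0, 0, 95], [0, 2, 67]]

    model = DecisionTreeClassifier(criterion="gini")  # induction: learn the model
    model.fit(X_train, y_train)
    print(model.predict(X_test))                      # deduction: predict classes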
Classification Techniques

Base Classifiers:
- Decision Tree based Methods
- Rule-based Methods
- Nearest-neighbor
- Neural Networks
- Deep Learning
- Naïve Bayes and Bayesian Belief Networks
- Support Vector Machines

Ensemble Classifiers:
- Boosting, Bagging, Random Forests
Example of a Decision Tree

Training Data:
  ID  Home Owner  Marital Status  Annual Income  Defaulted Borrower
  1   Yes         Single          125K           No
  2   No          Married         100K           No
  3   No          Single          70K            No
  4   Yes         Married         120K           No
  5   No          Divorced        95K            Yes
  6   No          Married         60K            No
  7   Yes         Divorced        220K           No
  8   No          Single          90K            Yes
  9   No          Married         75K            No
  10  No          Single          90K            Yes

Model: Decision Tree (splitting attributes: Home Owner, MarSt, Income)

  Home Owner?
    Yes -> NO
    No  -> Marital Status?
             Single, Divorced -> Annual Income?
                                   < 80K -> NO
                                   > 80K -> YES
             Married -> NO
Another Example of Decision Tree

The same training data admits a different tree:

  Marital Status?
    Married -> NO
    Single, Divorced -> Home Owner?
                          Yes -> NO
                          No  -> Annual Income?
                                   < 80K -> NO
                                   > 80K -> YES

There could be more than one tree that fits the same data!
Apply Model to Test Data

Test Data:
  Home Owner  Marital Status  Annual Income  Defaulted Borrower
  No          Married         80K            ?

Start from the root of the tree and follow the test conditions:
1. Home Owner = No, so take the "No" branch down to the Marital Status test.
2. Marital Status = Married, so take the "Married" branch, which is a leaf labeled NO.
3. Assign Defaulted to "No".
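A learned tree is just nested test conditions, so the walk-through above can be written directly as code. A minimal sketch of this slide's tree (the function and argument names are my own, not from the slides):

    def classify(home_owner, marital_status, annual_income_k):
        """Apply the example decision tree to one record (income in thousands)."""
        if home_owner == "Yes":               # root test: Home Owner?
            return "No"
        if marital_status == "Married":       # MarSt test on the "No" branch
            return "No"
        return "No" if annual_income_k < 80 else "Yes"   # Income test

    # The slide's test record: Home Owner = No, Married, Annual Income = 80K.
    print(classify("No", "Married", 80))      # -> "No", as assigned on the slide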
Decision Tree Classification Task

The same framework as the general approach shown earlier: the training set (Tids 1 through 10) is fed to a tree induction algorithm, which learns a decision tree (induction); the decision tree is then applied to the test set (Tids 11 through 15) to assign a class label to each record (deduction).
Decision Tree Induction

Many algorithms:
- Hunt's Algorithm (one of the earliest)
- CART
- ID3, C4.5
- SLIQ, SPRINT
General Structure of Hunt's Algorithm

Let Dt be the set of training records that reach a node t (for the root, Dt is the full ten-record default data shown earlier).

General Procedure:
- If Dt contains records that belong to the same class yt, then t is a leaf node labeled as yt.
- If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets. Recursively apply the procedure to each subset.
Hunt's Algorithm: Worked Example

Applying the procedure to the ten-record default data, with leaf class counts shown as (No, Yes):

(a) Start with a single leaf covering all records: Defaulted = No (7,3).

(b) Split on Home Owner:
      Yes -> Defaulted = No (3,0)
      No  -> Defaulted = No (4,3)

(c) The "No" branch is still impure, so split it on Marital Status:
      Yes -> Defaulted = No (3,0)
      No  -> Marital Status?
               Married          -> Defaulted = No (3,0)
               Single, Divorced -> Defaulted = Yes (1,3)

(d) Split the Single/Divorced leaf on Annual Income:
      < 80K  -> Defaulted = No (1,0)
      >= 80K -> Defaulted = Yes (0,3)
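In code, Hunt's procedure is a short recursion. The sketch below is a simplified rendering under assumptions the slides leave open: records are (attributes, label) pairs, and find_best_test is an assumed helper (for example, one that picks the binary test with the lowest weighted Gini) that returns None when no useful split exists.

    def hunt(records, find_best_test):
        """Grow a decision tree from (attributes, label) records, Hunt-style."""
        labels = {label for _, label in records}
        if len(labels) == 1:                  # all records in one class: leaf node
            return ("leaf", labels.pop())
        test = find_best_test(records)        # assumed helper: picks a binary test
        if test is None:                      # no useful split: majority-class leaf
            counts = {c: sum(1 for _, y in records if y == c) for c in labels}
            return ("leaf", max(counts, key=counts.get))
        passed = [r for r in records if test(r[0])]
        failed = [r for r in records if not test(r[0])]
        return ("node", test,
                hunt(passed, find_best_test),
                hunt(failed, find_best_test))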
Design Issues of Decision Tree Induction

How should training records be split?
- Method for specifying the test condition, depending on attribute types
- Measure for evaluating the goodness of a test condition

How should the splitting procedure stop?
- Stop splitting if all the records belong to the same class or have identical attribute values
- Early termination
Methods for Expressing Test Conditions

Depends on attribute types:
- Binary
- Nominal
- Ordinal
- Continuous

Depends on number of ways to split:
- 2-way split
- Multi-way split
Test Condition for Nominal Attributes

Multi-way split: use as many partitions as distinct values.
  Marital Status: Single / Divorced / Married

Binary split: divide the values into two subsets. For Marital Status there are three possibilities (see the sketch below):
  {Married} vs {Single, Divorced}
  {Single} vs {Married, Divorced}
  {Single, Married} vs {Divorced}
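Enumerating the candidate binary splits of a nominal attribute is a small combinatorial exercise: a k-valued attribute has 2^(k-1) - 1 distinct two-subset partitions, which is 3 for Marital Status above. A quick illustrative sketch (not from the slides):

    from itertools import combinations

    def binary_splits(values):
        """Yield each unordered two-subset partition of a nominal attribute once."""
        values = sorted(values)
        for r in range(1, len(values)):
            for subset in combinations(values, r):
                rest = tuple(v for v in values if v not in subset)
                if subset < rest:          # emit each unordered partition once
                    yield subset, rest

    for left, right in binary_splits(["Single", "Divorced", "Married"]):
        print(left, "vs", right)           # prints the three partitions above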
Test Condition for Ordinal Attributes

Multi-way split: use as many partitions as distinct values.
  Shirt Size: Small / Medium / Large / Extra Large

Binary split: divide the values into two subsets, preserving the order property among attribute values.
  {Small, Medium} vs {Large, Extra Large}
  {Small} vs {Medium, Large, Extra Large}

This grouping violates the order property:
  {Small, Large} vs {Medium, Extra Large}
Test Condition for Continuous Attributes

(i) Binary split: Annual Income > 80K? (Yes / No)

(ii) Multi-way split: Annual Income?
  < 10K / [10K, 25K) / [25K, 50K) / [50K, 80K) / > 80K
Splitting Based on Continuous Attributes

Different ways of handling:
- Discretization to form an ordinal categorical attribute. Ranges can be found by equal interval bucketing, equal frequency bucketing (percentiles), or clustering.
  - Static: discretize once at the beginning
  - Dynamic: repeat at each node
- Binary decision: (A < v) or (A >= v). Consider all possible splits and find the best cut; this can be more compute intensive.

How to Determine the Best Split

Before splitting: 10 records of class 0 and 10 records of class 1. Candidate test conditions:

  Gender (Yes / No): children with class counts (C0: 6, C1: 4) and (C0: 4, C1: 6)
  Car Type (Family / Sports / Luxury): (C0: 1, C1: 3), (C0: 8, C1: 0), (C0: 1, C1: 7)
  Customer ID (c1 ... c20): c1 through c10 each (C0: 1, C1: 0); c11 through c20 each (C0: 0, C1: 1)

Which test condition is the best?

Greedy approach: nodes with a purer class distribution are preferred, so we need a measure of node impurity:

  C0: 5, C1: 5   high degree of impurity
  C0: 9, C1: 1   low degree of impurity

Measures of Node Impurity

- Gini Index: $\mathrm{GINI}(t) = 1 - \sum_j [p(j \mid t)]^2$
- Entropy: $\mathrm{Entropy}(t) = -\sum_j p(j \mid t) \log p(j \mid t)$
- Misclassification error: $\mathrm{Error}(t) = 1 - \max_i P(i \mid t)$

Finding the Best Split

1. Compute the impurity measure (P) before splitting.
2. Compute the impurity measure (M) after splitting: compute the impurity of each child node; M is the weighted impurity of the children.
3. Choose the attribute test condition that produces the highest gain, Gain = P - M, or equivalently the lowest impurity measure after splitting (M).

For example, given candidate tests A? and B? at a parent with class counts (N00, N01) and impurity P: test A produces children N1 (N10, N11) and N2 (N20, N21) with weighted impurity M1, test B produces children N3 (N30, N31) and N4 (N40, N41) with weighted impurity M2, and we compare Gain = P - M1 against P - M2.

Measure of Impurity: GINI

Gini index for a given node t:

  $\mathrm{GINI}(t) = 1 - \sum_j [p(j \mid t)]^2$

(Note: p(j | t) is the relative frequency of class j at node t.)

- Maximum (1 - 1/nc, where nc is the number of classes) when records are equally distributed among all classes, implying the least interesting information.
- Minimum (0.0) when all records belong to one class, implying the most interesting information.

For a 2-class problem with class proportions (p, 1 - p):

  GINI = 1 - p^2 - (1 - p)^2 = 2p(1 - p)

  C1: 0, C2: 6   Gini = 0.000
  C1: 1, C2: 5   Gini = 0.278
  C1: 2, C2: 4   Gini = 0.444
  C1: 3, C2: 3   Gini = 0.500

Computing Gini Index of a Single Node

  C1: 0, C2: 6   P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
                 Gini = 1 - P(C1)^2 - P(C2)^2 = 1 - 0 - 1 = 0
  C1: 1, C2: 5   P(C1) = 1/6, P(C2) = 5/6
                 Gini = 1 - (1/6)^2 - (5/6)^2 = 0.278
  C1: 2, C2: 4   P(C1) = 2/6, P(C2) = 4/6
                 Gini = 1 - (2/6)^2 - (4/6)^2 = 0.444
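The single-node computation above is a one-liner. A direct transcription of the formula, checked against the slide's worked examples:

    def gini(counts):
        """Gini index of a node given its class counts: 1 - sum_j p(j|t)^2."""
        n = sum(counts)
        return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

    print(round(gini([0, 6]), 3))   # 0.0   (all records in one class)
    print(round(gini([1, 5]), 3))   # 0.278
    print(round(gini([2, 4]), 3))   # 0.444
    print(round(gini([3, 3]), 3))   # 0.5   (maximum for two classes)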
Computing Gini Index for a Collection of Nodes

When a node p is split into k partitions (children):

  $\mathrm{GINI}_{split} = \sum_{i=1}^{k} \frac{n_i}{n} \, \mathrm{GINI}(i)$

where n_i = number of records at child i, and n = number of records at the parent node p.

- Choose the attribute that minimizes the weighted average Gini index of the children.
- The Gini index is used in decision tree algorithms such as CART, SLIQ, and SPRINT.

Binary Attributes: Computing GINI Index

Splitting into two partitions; the effect of weighing partitions is that larger and purer partitions are sought.

  Parent: C1 = 7, C2 = 5, Gini = 0.486

  Split on B?:
    Node N1 (Yes): C1 = 5, C2 = 1, Gini(N1) = 1 - (5/6)^2 - (1/6)^2 = 0.278
    Node N2 (No):  C1 = 2, C2 = 4, Gini(N2) = 1 - (2/6)^2 - (4/6)^2 = 0.444

  Weighted Gini of N1, N2 = 6/12 * 0.278 + 6/12 * 0.444 = 0.361
  Gain = 0.486 - 0.361 = 0.125

Categorical Attributes: Computing Gini Index

For each distinct value, gather counts for each class in the dataset, and use the count matrix to make decisions.

  Multi-way split (CarType):
          Family  Sports  Luxury
    C1    1       8       1
    C2    3       0       7
    Gini = 0.163

  Two-way splits (find the best partition of values):
          {Sports, Luxury}  {Family}
    C1    9                 1
    C2    7                 3
    Gini = 0.468

          {Family, Luxury}  {Sports}
    C1    2                 8
    C2    10                0
    Gini = 0.167

  Which of these is the best?

Continuous Attributes: Computing Gini Index

- Use binary decisions based on one value.
- Several choices for the splitting value: the number of possible splitting values equals the number of distinct values.
- Each splitting value v has a count matrix associated with it: the class counts in each of the partitions, A < v and A >= v.
- Simple method to choose the best v: for each v, scan the database to gather the count matrix and compute its Gini index. This is computationally inefficient, since work is repeated.

  Example count matrix for Annual Income at v = 80:
                   <= 80   > 80
    Defaulted Yes  0       3
    Defaulted No   3       4
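Weighting the children by size is equally short in code. This sketch redefines the gini() helper from the previous sketch so it stands alone, and reproduces the binary and multi-way examples above:

    def gini(counts):
        """Gini index of a node given its class counts (as in the previous sketch)."""
        n = sum(counts)
        return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

    def gini_split(children):
        """Weighted Gini of a split: sum_i (n_i / n) * GINI(i)."""
        n = sum(sum(child) for child in children)        # records at the parent
        return sum(sum(child) / n * gini(child) for child in children)

    # Binary-attribute example from this slide:
    print(round(gini([7, 5]), 3))                         # parent: 0.486
    print(round(gini_split([[5, 1], [2, 4]]), 3))         # weighted: 0.361
    print(round(gini([7, 5]) - gini_split([[5, 1], [2, 4]]), 3))  # gain: 0.125

    # Multi-way CarType example (Family, Sports, Luxury):
    print(round(gini_split([[1, 3], [8, 0], [1, 7]]), 3)) # 0.163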
Continuous Attributes: Computing Gini Index (continued)

For efficient computation, for each attribute:
- Sort the attribute on its values.
- Linearly scan these values, each time updating the count matrix and computing the Gini index.
- Choose the split position that has the least Gini index.

Annual Income example (class label: Cheat):

  Sorted values:  60   70   75   85   90   95   100  120  125  220
  Cheat:          No   No   No   Yes  Yes  Yes  No   No   No   No

  Split positions with class counts (<= / >) and the weighted Gini index of each split:

  Position  55     65     72     80     87     92     97     110    122    172    230
  Yes       0/3    0/3    0/3    0/3    1/2    2/1    3/0    3/0    3/0    3/0    3/0
  No        0/7    1/6    2/5    3/4    3/4    3/4    3/4    4/3    5/2    6/1    7/0
  Gini      0.420  0.400  0.375  0.343  0.417  0.400  0.300  0.343  0.375  0.400  0.420

The least Gini index, 0.300, occurs at split position 97, so Annual Income <= 97 is the best cut. A sketch of this sorted linear scan follows below.
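A self-contained sketch of the sorted linear scan, applied to the Annual Income column. Candidate thresholds are taken midway between adjacent sorted values (the slide labels the best one 97):

    def best_split(values, labels):
        """Best binary split of one continuous attribute by weighted Gini."""
        def node_gini(counts):
            m = sum(counts.values())
            return 1.0 - sum((c / m) ** 2 for c in counts.values()) if m else 0.0

        pairs = sorted(zip(values, labels))            # sort once
        n = len(pairs)
        classes = sorted(set(labels))
        left = {c: 0 for c in classes}                 # counts for A <= v
        right = {c: labels.count(c) for c in classes}  # counts for A > v
        best_g, best_v = float("inf"), None
        for i in range(n - 1):                         # linear scan
            v, y = pairs[i]
            left[y] += 1                               # move one record leftward
            right[y] -= 1
            g = (i + 1) / n * node_gini(left) + (n - i - 1) / n * node_gini(right)
            if g < best_g:
                best_g, best_v = g, (v + pairs[i + 1][0]) / 2
        return best_g, best_v

    income = [125, 100, 70, 120, 95, 60, 220, 85, 75, 90]   # Tids 1-10
    cheat  = ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"]
    print(best_split(income, cheat))   # (0.3, 97.5): the slide's minimum of 0.300 near 97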