Psych Assessment Test 2

Total Flash Cards » 72
 
1. 
Reliability
 
dependable, repeatable results – necessary but not sufficient for validity
 
2. 
Methods for Estimating a Test's Reliability
 
1. Test-Retest Method 2. Parallel/Alternate Forms Method
 
3. 
Explain Test-Retest Method
 
a. Evaluates temporal stability – consistency over time. Administer the test 2 or more times to the same group of individuals. b. Reliability estimate = correlation between Time 1 and Time 2 scores.
 
4. 
Advantages & Disadvantages of Test-Retest Method
 
c. Advantage – same test, same people. d. Disadvantage – the characteristic might change over the time interval. e. Carryover Effects – being tested on one occasion can affect the second occasion or may create a desire to be consistent. Optimal interval: 2-3 weeks.
 
5. 
Explain Parallel/Alternate Forms Method
 
a. Similar to test-retest, but an alternate form of the test is used that still measures the same content. b. Compute the correlation between Form A & Form B scores. c. High correlation = reliable.
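Both the test-retest and parallel-forms estimates boil down to a Pearson correlation between two sets of scores. A minimal Python sketch of that calculation; the scores and variable names below are made up for illustration.

```python
import numpy as np

# Hypothetical total scores for the same 6 people on two occasions
# (test-retest) or on Form A vs. Form B (parallel forms).
time1 = np.array([22, 31, 27, 35, 18, 29])
time2 = np.array([24, 30, 25, 36, 20, 28])

# Reliability estimate = Pearson correlation between the two administrations.
r = np.corrcoef(time1, time2)[0, 1]
print(f"Reliability estimate (Pearson r): {r:.2f}")
```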
 
6. 
Define Internal Consistency
 
Extent to which the items are interrelated as a whole. Do the items "hang together"?
 
7. 
Tests for measuring internal consistency
 
1. Split-Half Reliability 2. Coefficient Alpha (Cronbach's α – the "Jesus fish" symbol)
 
8. 
How to do split half reliability
 
a. Divide the test in half, score both parts, and see whether the halves correlate. b. Odd/Even Method – sum items #1, 3, 5, 7, 9, etc. and items #2, 4, 6, 8, etc. separately, so the order of items is not as important.
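A minimal sketch of the odd/even split described above; the item responses are hypothetical.

```python
import numpy as np

# Hypothetical item scores: rows = people, columns = 10 test items (0/1).
items = np.array([
    [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],
    [0, 0, 1, 0, 0, 1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],
])

# Odd/even split: sum items 1, 3, 5, ... and items 2, 4, 6, ... separately.
odd_half = items[:, 0::2].sum(axis=1)   # items 1, 3, 5, 7, 9
even_half = items[:, 1::2].sum(axis=1)  # items 2, 4, 6, 8, 10

# Split-half reliability = correlation between the two half-test scores.
r_half = np.corrcoef(odd_half, even_half)[0, 1]
print(f"Split-half correlation: {r_half:.2f}")
```

In practice this half-test correlation is usually stepped up with the Spearman-Brown formula, since each half is shorter than the full test; the cards don't cover that adjustment.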
 
9. 
Advantages & Disadvantages of Split-Half Reliability
 
c. Advantages – use same items at the same time d. Disadvantage – different results based on how you split the times.
 
10. 
Define Coefficient Alpha (Cronbach's α)
 
a. Better measure than split-half. b. Measures internal reliability (consistency). c. Has all the advantages of split-half. d. Depends on: i. Average intercorrelation of items – should be high; higher average intercorrelation = higher α. ii. Number of items – more items = more reliable. e. Desirable test results: i. Should exceed .70. ii. Desirable to be above .90. iii. Self-Esteem Scale = .832.
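A minimal sketch using the standard alpha formula, α = (k / (k − 1)) × (1 − Σ item variances / total-score variance); the response data below are made up.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an item matrix (rows = people, cols = items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert responses from 6 people on a 4-item scale.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
])
print(f"alpha = {cronbach_alpha(responses):.3f}")  # compare against the .70 / .90 guidelines
```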
 
11. 
Errors of Measurement
 
unwanted variability
 
12. 
Classical Test Theory (True Score Theory)
 
Observed Score = True Score + Error (deviation). Errors are random (+/–). Reliability = ratio of true-score variance to observed-score variance (σ²_true / σ²_observed).
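A minimal simulation of the card's equation; the means and SDs chosen below are arbitrary, just to show that the variance ratio behaves as described.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate classical test theory: Observed = True + random Error.
n = 10_000
true_scores = rng.normal(100, 15, n)   # hypothetical true scores
errors = rng.normal(0, 5, n)           # random errors, mean 0
observed = true_scores + errors

# Reliability = ratio of true-score variance to observed-score variance.
reliability = true_scores.var(ddof=1) / observed.var(ddof=1)
print(f"Reliability ≈ {reliability:.2f}")  # ≈ 15² / (15² + 5²) = 0.90
```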
 
13. 
Standard Error of Measurement (SEM)
 
SEM is the standard deviation of the distribution of observed scores around the true score (i.e., of the errors of measurement). Expresses the expected error or margin of error – the "average error."
 
14. 
SEM to get 95% score
 
To get a 95% confidence range, calculate SEM × 1.96 and form the range: observed score ± 1.96 × SEM.
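A minimal sketch of cards 13-14, assuming the standard formula SEM = SD × √(1 − reliability), which these cards don't state explicitly; the test SD, reliability, and score are hypothetical.

```python
import math

# Hypothetical values: test SD, reliability estimate, and one person's observed score.
sd = 15.0
reliability = 0.90
observed_score = 110.0

# Standard formula (assumed, not spelled out in the cards): SEM = SD * sqrt(1 - reliability).
sem = sd * math.sqrt(1 - reliability)

# 95% confidence band: observed score +/- 1.96 * SEM.
low, high = observed_score - 1.96 * sem, observed_score + 1.96 * sem
print(f"SEM = {sem:.2f}; 95% range: {low:.1f} to {high:.1f}")
```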
 
15. 
Item Reliability Defined
 
Used to build reliability into the test – analysis of the individual items of the test. -Goal – to produce a reliable test, which makes it more likely we will have a valid test. -Item Reliability – does the item contribute to the reliability of the test; does it "hang together" with the other items? -Item-Total Correlation – score on an item compared to the score on the other items.
 
16. 
Item Analysis
 
Choosing items that have good psychometric characteristics. -Item's contribution to Reliability (Internal Consistency). -SPSS – Corrected Item-Total Correlation. -SPSS – Cronbach's Alpha if Item Deleted.
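A hand-rolled sketch of the two statistics named above (corrected item-total correlation and alpha if deleted); the data and the cronbach_alpha helper are illustrative, not SPSS output.

```python
import numpy as np

def cronbach_alpha(items):
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

# Hypothetical 1-5 responses: rows = people, columns = items.
items = np.array([
    [4, 5, 4, 1],
    [2, 2, 3, 4],
    [5, 4, 5, 2],
    [3, 3, 3, 5],
    [1, 2, 2, 3],
    [4, 4, 5, 1],
])

for i in range(items.shape[1]):
    rest = np.delete(items, i, axis=1)              # all other items
    # Corrected item-total correlation: item vs. sum of the remaining items.
    r_it = np.corrcoef(items[:, i], rest.sum(axis=1))[0, 1]
    # Alpha if this item were deleted.
    alpha_del = cronbach_alpha(rest)
    print(f"Item {i + 1}: corrected item-total r = {r_it:.2f}, "
          f"alpha if deleted = {alpha_del:.2f}")
```

An item with a low (or negative) corrected item-total correlation and a higher "alpha if deleted" is a candidate for removal.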
 
17. 
Item Discrimination
 
(Individual Differences) – Do responses on that item discriminate between people who score high and people who score low on the test?
 
18. 
Item Variance
 
Choose questions on which people differ and there is a good amount of variability.
 
19. 
Item Response Theory
 
Item Characteristic Curve (see figure on p. 78).
 
20. 
Item Difficulty
 
The probability of a correct or passing answer on that particular item.
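A minimal sketch of item difficulty as the proportion of test takers passing each item; the right/wrong responses are hypothetical.

```python
import numpy as np

# Hypothetical right/wrong (1/0) responses: rows = people, columns = items.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
])

# Item difficulty (p value) = proportion of test takers answering each item correctly.
difficulty = responses.mean(axis=0)
for i, p in enumerate(difficulty, start=1):
    print(f"Item {i}: p = {p:.2f}")
```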
 
21. 
Validity Defined
 
Property of the test – does the test measure what it is supposed to measure? Measures a construct (an idea such as intelligence).
 
22. 
What is most important part of test
 
Validity
 
23. 
3 Types of Validity Evidence
 
1. Content 2. Criterion 3. Construct
 
24. 
Content Validity
 
-The extent to which the content adequately samples the universe of interest. -Does the test cover a specific area adequately? -Reviewed by "experts" in the area; "face validity" = the test looks like it measures what it should. -Expert judgments based on informed understandings.
 
25. 
Criterion Validity Defined
 
-The ability of the test to predict certain criteria that are directly relevant to the test. -Test one test against another. -Correlation between test score & criterion score: ranges 0-1, where 1 means you are measuring the same thing. -E.g. self-rated score compared to peer-rated score – do they correlate? -E.g. SAT & grades in college.
 
26. 
2 Types of Criterion Validity
 
-Concurrent – the criterion exists at the same point in time, e.g. test score & interview at the same time. -Predictive – the ability to predict something at some future point, e.g. memory test @ 55 predicts Alzheimer's @ 75.
 
27. 
Criterion Validity - Concurrent
 
The criterion exists at the same point in time, e.g. test score & interview at the same time.
 
28. 
Criterion Validity - Predictive
 
The ability to predict something at some future point, e.g. memory test @ 55 predicts Alzheimer's @ 75.
 
29. 
Construct Validity
 
-1955 paper (Cronbach & Meehl). -Includes all other validity types and more. -Relevant when the attribute of interest exists as a construct (a theoretical concept/attribute of an individual, e.g. intelligence – an abstraction). -Theory Building & Theory Testing – develop a test and measure the extent to which a body of evidence supports what it is measuring.
 
30. 
Nomological Net Defined
 
-A system of laws relating the construct to other constructs, including some variables that are directly observed.
 
31. 
Convergent Validity
 
-Correlation with another independent measure of the same trait. -Do scores on one test converge with scores on another test?
 
32. 
Discriminant Validity
 
-Low correlations with another independent measure of a theoretically unrelated trait, e.g. intelligence & friendliness.
 
33. 
Multitrait-Multimethod Matrix
 
2 or more traits + 2 or more methods
 
34. 
Trait Variance
 
Variance shared because both measures assess the same trait, e.g. both measure extroversion.
 
35. 
Method Variance
 
Variance shared because both measures use the same method, e.g. both use self-report.
 
36. 
3 Basic Strategies of test construction
 
1. Correspondence or Rational View 2. Instrumental or Empirical View 3. Substantive or Construct View
 
37. 
Test Construction: Correspondence or Rational View
 
a. Select items for the test on the basis of an assumed relation between the item & the attribute to be measured. b. Based on the judgment of the individual constructing the test. c. Measures the internal state of the individual ("face validity"). d. Woodworth Personal Data Sheet – developed to measure psychoneurotic tendencies – used for the military.
 
38. 
Test Construction: Correspondence or Rational View DISADVANTAGES
 
-Assumption that items have a common meaning. -Assumption that respondents know their "inner states." -Respondents may not honestly report on their state. -Assumes that items are related to the trait.
 
39. 
Test Construction Instrumental or Empirical View
 
a. Use statistical criteria for selecting items – only uses data. b. Contrasted Groups Method: i. Start with a large number of items. ii. Give the items to 2 groups of people: 1. Group 1 – criterion group – these people "have" the trait, e.g. they are shy. 2. Group 2 – control group – people who "don't have" the trait. iii. Compare answers; items that differentiate between the groups become part of the test, e.g. the MMPI was based on this method. iv. Totally based on how the 2 groups responded to the items.
 
40. 
Test Construction Instrumental or Empirical View DISADVANTAGES
 
-Need to be able to reliably specify the "criterion group." -Items we start out with have to be large in number & broad to find a few that work. -When items are based on extreme groups, there's no guarantee the items will work on groups in the middle. -Don't know the source of the difference or why it exists; some other trait could be confounding. -Chance – possibility of committing a Type I error (false positive).
 
41. 
Test Construction Substantive or Construct View
 
-Used when the attribute we are measuring is a psychological construct. -Substantive Validity: -Does the test capture the universe of interest? -Does it have adequacy of coverage in proper proportion? -Is it a good model of the construct? -Does it have the right number of items for each feature?
 
42. 
Test Construction Substantive or Construct View DISADVANTAGES
 
-2 Problems: -Construct Underrepresentation – have only covered part of the domain; test items don't cover the full domain of interest (e.g. Beck Depression Inventory). -Construct-Irrelevant Variance – test items cover too much and include irrelevant material; the test may be too broad.
 
43. 
Structural Validity
 
i. The items together demonstrate internal consistency that is appropriate for that construct. Do items "hang together"? Are they intercorrelated to a degree appropriate for that construct? ii. Inter-Item Structure/Correlation: 1. Choose items that maximize the internal consistency of the test. SPSS: "Item Correlation" or "Contribution to Alpha."
 
44. 
Structural Fidelity (trueness)
 
-Do the items intercorrelate in the same way as the construct is structured. Do items “mirror” the construct?
 
45. 
Combining Items
 
Cumulative Model – item scores are added together to get a total; appropriate when the attribute is dimensional or continuous (e.g. shyness). Class Model – assumes a categorical or qualitative attribute; you either are or are not a member, with no degree of membership; based on a configuration of scores / meeting certain criteria (e.g. DSM diagnoses).
 
46. 
External Validity
 
-Concerns the relation of the test scores to criteria that we want to predict, aka validity. -Relation between test score & construct should be the same.
 
47. 
Response Sets & Response Styles
 
How you respond to a test item is affected by extraneous factors unrelated to the construct.
 
48. 
Response Style (defensive, open)
 
-Consistent pattern that would affect how the person responds to many different tests. -An aspect of their personality. -Need for social approval may affect how they respond to a test. **
 
49. 
Response Set (self presentation)
 
-Strategy on a particular test influenced by the goal the person has for the impression they want to make in that situation. -Malingerers – present themselves as sick, in need of help.
 
50. 
Set vs. Style
 
-Set meets a need in that moment -Style is how the person generally reacts.
 
51. 
Set & Style In Common
 
-Represent a non-substantive basis for the test response. -Something outside the "construct" that is influencing the results.
 
52. 
Acquiescence
 
Tendency to agree with an item regardless of what's being asked – "agreement tendency."
 
53. 
Problems & solutions w/ acquiescence
 
-With both, you don't get an accurate measure of the trait. -Happens when all items are phrased similarly (all affirmative or all negative). -Solution – a balanced test has both affirmatively and negatively phrased items.
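A minimal sketch of scoring a balanced scale by reverse-scoring the negatively phrased items; the item names and keying below are hypothetical.

```python
# Hypothetical balanced 1-5 Likert scale: some items are phrased negatively,
# so an "agree with everything" responder does not automatically get a high score.
responses = {"item1": 5, "item2": 4, "item3": 5, "item4": 2}   # raw answers
negatively_phrased = {"item2", "item4"}                        # reverse-keyed items

def reverse(score: int, max_point: int = 5, min_point: int = 1) -> int:
    """Reverse-score a Likert response (5 -> 1, 4 -> 2, ...)."""
    return max_point + min_point - score

total = sum(reverse(v) if k in negatively_phrased else v
            for k, v in responses.items())
print(f"Scale total with reverse scoring: {total}")
```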
 
54. 
Social Desirability
 
Tendency to endorse items that describe yourself as "socially desirable."
 
55. 
Solution for the social desirability problem
 
Edwards – designed a scale based on the MMPI (he argued the MMPI is tainted by social desirability). -Developed a scale of 39 items rated highest in "social desirability." -Higher scores on the scale were associated with lower (reported) maladjustment.
 
56. 
Marlowe-Crowne Social Desirability Scale
 
-Looks at behaviors we all have and sees whether people deny having them. -Used to ensure that a test isn't highly contaminated by social desirability.
 
57. 
Structured/Objective Tests
 
-Relatively unambiguous set of items. -Respondents answer from a limited number of response options, such as True/False, Yes/No, or Likert scales (1-5).
 
58. 
Structure vs. Objective
 
Structured – unambiguous, direct. Objective – the test can be scored so that scoring is perfectly reliable.
 
59. 
Norman's "Big 5 Factors"
 
1. Surgency (Extroversion) – largest factor (Talkative vs. Silent). 2. Agreeableness (Good-natured vs. Irritable; Cooperative vs. Negativistic). 3. Conscientiousness (Responsible vs. Undependable). 4. Emotional Stability (Adjustment; inverse: Neuroticism) (Calm vs. Anxious). 5. Culture (Intellect / Openness to Experience) (Imaginative vs. Simple; Conventional vs. Creative).
 
60. 
adequate taxonomy
 
Norman's Big 5 factors are an adequate taxonomy of personality – they don't cover every aspect of personality in every possible way, e.g. they don't cover level of spirituality.
 
61. 
Costa & McCrae’s Five Factor Model
 
1. Extroversion 2. Agreeableness 3. Conscientiousness 4. Neuroticism 5. Openness to Experience
 
62. 
Interpersonal “Social Aspect”
 
-Personality traits that affect how you deal with and interact with others. -Subset of the five-factor model that seems to define the interpersonal: i. Extroversion ii. Agreeableness
 
63. 
10 Clinical Scales
 
Hs (1) – Hypochondriasis; D (2) – Depression; Hy (3) – Hysteria; Pd (4) – Psychopathic Deviate; Mf (5) – Masculinity/Femininity; Pa (6) – Paranoia; Pt (7) – Psychasthenia; Sc (8) – Schizophrenia; Ma (9) – Hypomania; Si (0) – Social Introversion
 
64. 
MMPI Validity Scales
 
measures test taking attitudes
 
65. 
List the 3 validity scales
 
L - Lie - underreport F - Frequency - overreport K - Correction - defensiveness
 
66. 
L Scale
 
-Tendency to underreport. -Naïve attempt to create an overly positive impression of oneself. -Deny common faults.
 
67. 
F Scale
 
-Tendency to overreport. -Present oneself in an overly negative light (sick, malingering). -Frequently endorses items that are rarely endorsed by the normative sample.
 
68. 
K-Scale
 
-Measures defensiveness. -More sophisticated than L. -Derived from psychiatric patients who presented themselves as normal ("false normals"). -A fraction of the K score (e.g. .5K) is added to certain clinical scales to account for defensiveness.
 
69. 
Problems w/ MMPI
 
1) Passage of time has made some items obsolete. 2) Objectionable content – sexist/racist items. 3) Small normative sample that was not demographically representative.
 
70. 
Steps in Interpreting MMPI
 
1) Note test taking behaviors 2) Score & Plot the Profile 3) Evaluate validity of the results 4) Compare profiles against code types 5) Individuate 6) Write the report
 
71. 
MMPI Interpretation How to compare profile types
 
-Identify the 2 highest scales & assign a code type. -Scales have to be above 65 (the clinical cutoff). -Within normal range = no scale above 65. -Spike = only 1 scale above 65.
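A minimal sketch of the code-type rule described above; the T-scores are invented for illustration, and this is not an actual MMPI scoring program.

```python
# Hypothetical MMPI clinical scale T-scores (scale number -> T-score).
t_scores = {1: 58, 2: 72, 3: 61, 4: 78, 5: 50, 6: 55, 7: 70, 8: 63, 9: 48, 0: 52}

CUTOFF = 65  # clinical cutoff used in the cards

elevated = {scale: t for scale, t in t_scores.items() if t > CUTOFF}
ranked = sorted(elevated, key=elevated.get, reverse=True)

if len(ranked) >= 2:
    print(f"Code type: {ranked[0]}-{ranked[1]}")   # two highest elevated scales
elif len(ranked) == 1:
    print(f"Spike profile on scale {ranked[0]}")   # only one scale above the cutoff
else:
    print("Within normal range (no scale above 65)")
```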
 
72. 
MMPI Interpretation How to individuate
 
-Supplementary scales & other patterns in profile. -Critical items -Demographics