
How do you report F statistics in APA format? I am working on my PhD and am not certain that I am reporting F statistics in the proper manner.

I have been reporting them as F(_, _), but I am unclear about what number goes in the second space.

The F ratio statistic has a numerator and denominator degrees of freedom. Thus, you report:

`F(numerator_df, denominator_df) = F_value, p = …, effect size = …`

The numerator degrees of freedom relates to the factor of interest; the denominator degrees of freedom corresponds to the degrees of freedom for the error variance.

The exact way that these degrees of freedom are calculated depends on the statistical test you are using. Standard textbooks describe these approaches. For example, in a standard one-way between-subjects ANOVA with $k$ groups and $n$ participants per group, you would have $k-1$ numerator degrees of freedom and $kn-k$ denominator degrees of freedom.

Generally, you will be able to read off these numbers from the output of your statistics package.

### Example

Here is an example with $k = 3$ groups and $n = 5$ participants per group. In the ANOVA output, the numerator df appears in the "group" row and the denominator df in the "error" row.

`F(2, 12) = 24.667, p < .001.`

The numbers inside the parentheses are the degrees of freedom for the F-statistic.

The second number is the within-group degrees of freedom. When you have the same number of subjects in all conditions, the second number will be the total number of subjects minus the number of cells (conditions) in your design.
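The degrees-of-freedom rules above can be checked by computing a one-way ANOVA by hand. This is a minimal sketch with made-up scores for $k = 3$ groups of $n = 5$ (these are not the data behind the F(2, 12) = 24.667 example):

```python
# Hypothetical scores for k = 3 groups with n = 5 participants each.
groups = [
    [2, 3, 4, 3, 3],
    [5, 6, 5, 6, 6],
    [8, 7, 9, 8, 8],
]

k = len(groups)                       # number of groups
n = len(groups[0])                    # participants per group
grand_mean = sum(sum(g) for g in groups) / (k * n)

# Between-groups (numerator) and within-groups (denominator) sums of squares.
ss_between = sum(n * (sum(g) / n - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / n) ** 2 for g in groups for x in g)

df_between = k - 1                    # numerator df: k - 1 = 2
df_within = k * n - k                 # denominator df: kn - k = 12

F = (ss_between / df_between) / (ss_within / df_within)
print(f"F({df_between}, {df_within}) = {F:.3f}")
```

Note that the degrees of freedom depend only on $k$ and $n$, which is why you can read them directly off any balanced one-way design.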

## How to Write an APA Research Paper

An APA-style paper includes the following sections: title page, abstract, introduction, method, results, discussion, and references. Your paper may also include one or more tables and/or figures. Different types of information about your study are addressed in each of the sections, as described below.

##### General formatting rules are as follows:

Do not put page breaks in between the introduction, method, results, and discussion sections.

The title page, abstract, references, table(s), and figure(s) should be on their own pages.

The entire paper should be written in the past tense, in a 12-point font, double-spaced, and with one-inch margins all around.

##### Title page

(see sample on p. 41 of APA manual)

- Title should be between 10 and 12 words and should reflect the content of the paper (e.g., IV and DV).
- Title, your name, and Hamilton College are all double-spaced (no extra spaces).
- Create a page header using the "View header" function in MS Word. On the title page, the header should include the following:

**Flush left: Running head:** THE RUNNING HEAD SHOULD BE IN ALL CAPITAL LETTERS. The running head is a short title that appears at the top of pages of published articles. It should not exceed 50 characters, including punctuation and spacing. (Note: on the title page, you actually write the words "Running head," but these words do not appear on subsequent pages; just the actual running head does. If you make a section break between the title page and the rest of the paper, you can make the header different for those two parts of the manuscript.)

##### Abstract (labeled, centered, not bold)

No more than 120 words, one paragraph, block format (i.e., don't indent), double-spaced.

##### Introduction

(Do not label as "Introduction." The title of the paper goes at the top of the page, not bold.)

The introduction of an APA-style paper is the most difficult to write. A good introduction will summarize, integrate, and critically evaluate the empirical knowledge in the relevant area(s) in a way that sets the stage for your study and why you conducted it. The introduction starts out broad (but not too broad!) and gets more focused toward the end. Here are some guidelines for constructing a good introduction:

- Don't put your readers to sleep by beginning your paper with the time-worn sentence, "Past research has shown (blah blah blah)." They'll be snoring within a paragraph! Try to draw your reader in by saying something interesting or thought-provoking right off the bat. Take a look at articles you've read. Which ones captured your attention right away? How did the authors accomplish this task? Which ones didn't? Why not? See if you can use articles you liked as a model. One way to begin (but not the only way) is to provide an example or anecdote illustrative of your topic area.
- Although you won't go into the details of your study and hypotheses until the end of the intro, you should foreshadow your study a bit at the end of the first paragraph by stating your purpose briefly, to give your reader a schema for all the information you will present next.
- Your intro should be a logical flow of ideas that leads up to your hypothesis. Try to organize it in terms of the *ideas* rather than who did what when. In other words, your intro shouldn't read like a story of "Schmirdley did such-and-such in 1991. Then Gurglehoff did something-or-other in 1993. Then. (etc.)" First, brainstorm all of the ideas you think are necessary to include in your paper. Next, decide which ideas make sense to present first, second, third, and so forth, and think about how you want to transition between ideas. When an idea is complex, don't be afraid to use a real-life example to clarify it for your reader. The introduction will end with a brief overview of your study and, finally, your specific hypotheses. The hypotheses should flow logically out of everything that's been presented, so that the reader has the sense of, "Of course. This hypothesis makes complete sense, given all the other research that was presented."
- When incorporating references into your intro, you do not necessarily need to describe every single study in complete detail, particularly if different studies use similar methodologies. Certainly you want to summarize key articles briefly, though, and point out differences in methods or findings of relevant studies when necessary. Don't make one mistake typical of a novice APA-paper writer by stating overtly why you're including a particular article (e.g., "This article is relevant to my study because…"). It should be obvious to the reader why you're including a reference without your explicitly saying so. DO NOT quote from the articles; instead, paraphrase by putting the information in your own words.
- Be careful about citing your sources (see APA manual). Make sure there is a one-to-one correspondence between the articles you've cited in your intro and the articles listed in your reference section.
- Remember that your audience is the broader scientific community, not the other students in your class or your professor. Therefore, you should assume they have a basic understanding of psychology, but you need to provide them with the complete information necessary for them to understand the research you are presenting.

##### Method (labeled, centered, bold)

The Method section of an APA-style paper is the most straightforward to write, but requires precision. Your goal is to describe the details of your study in such a way that another researcher could duplicate your methods exactly.

The Method section typically includes Participants, Materials and/or Apparatus, and Procedure sections. If the design is particularly complicated (multiple IVs in a factorial experiment, for example), you might also include a separate Design subsection or have a "Design and Procedure" section.

Note that in some studies (e.g., questionnaire studies in which there are many measures to describe but the procedure is brief), it may be more useful to present the Procedure section prior to the Materials section rather than after it.

##### Participants (labeled, flush left, bold)

Total number of participants (# women, # men), age range, mean and SD for age, racial/ethnic composition (if applicable), population type (e.g., college students). Remember to write numbers out when they begin a sentence.

- How were the participants recruited? (Don't say "randomly" if it wasn't random!) Were they compensated for their time in any way? (e.g., money, extra credit points)
- Write for a broad audience. Thus, do not write, "Students in Psych. 280…" Rather, write (for instance), "Students in a psychological statistics and research methods course at a small liberal arts college…"
- Try to avoid short, choppy sentences. Combine information into a longer sentence when possible.

##### Materials (labeled, flush left, bold)

Carefully describe any stimuli, questionnaires, and so forth. It is unnecessary to mention things such as the paper and pencil used to record the responses, the data recording sheet, the computer that ran the data analysis, the color of the computer, and so forth.

- If you included a questionnaire, you should describe it in detail. For instance, note how many items were on the questionnaire, what the response format was (e.g., a 5-point Likert-type scale ranging from 1 (strongly disagree) to 5 (strongly agree)), how many items were reverse-scored, whether the measure had subscales, and so forth. Provide a sample item or two for your reader.
- If you have created a new instrument, you should attach it as an Appendix.
- If you presented participants with various word lists to remember or stimuli to judge, you should describe those in detail here. Use subheadings to separate different types of stimuli if needed. If you are only describing questionnaires, you may call this section "Measures."

##### Apparatus (labeled, flush left, bold)

Include an apparatus section if you used specialized equipment for your study (e.g., the eye tracking machine) and need to describe it in detail.

##### Procedure (labeled, flush left, bold)

What did participants do, and in what order? When you list a control variable (e.g., "Participants all sat two feet from the experimenter."), explain WHY you did what you did. In other words, what nuisance variable were you controlling for? Your procedure should be as brief and concise as possible. Read through it. Did you repeat yourself anywhere? If so, how can you rearrange things to avoid redundancy? You may either write the instructions to the participants verbatim or paraphrase, whichever you deem more appropriate. Don't forget to include brief statements about informed consent and debriefing.

##### Results (labeled, centered, bold)

In this section, describe how you analyzed the data and what you found. If your data analyses were complex, feel free to break this section down into labeled subsections, perhaps one section for each hypothesis.

- Include a section for descriptive statistics
- List what type of analysis or test you conducted to test each hypothesis.
- Refer to your statistics textbook for the proper way to report results in APA style. A t-test, for example, is reported in the following format: *t*(18) = 3.57, *p* < .001, where 18 is the number of degrees of freedom (*N* - 2 for an independent-groups t-test). For a correlation: *r*(32) = -.52, *p* < .001, where 32 is the number of degrees of freedom (*N* - 2 for a correlation). For a one-way ANOVA: *F*(2, 18) = 7.00, *p* < .001, where 2 is the between-groups df and 18 is the within-groups df. Remember that if a finding has a p value greater than .05, it is "nonsignificant," not "insignificant." For nonsignificant findings, still provide the exact p values. For correlations, be sure to report the r² value as an assessment of the strength of the finding, to show what proportion of variability is shared by the two variables you're correlating. For t-tests and ANOVAs, report eta squared (η²).
- Report exact p values to two or three decimal places (e.g., *p* = .042; see p. 114 of APA manual). However, for p-values less than .001, simply put *p* < .001.
- Following the presentation of all the statistics and numbers, be sure to state the nature of your finding(s) in words and whether or not they support your hypothesis (e.g., &ldquoAs predicted &hellip&rdquo). This information can typically be presented in a sentence or two following the numbers (within the same paragraph). Also, be sure to include the relevant means and SDs.
- It may be useful to include a table or figure to represent your results visually. Be sure to refer to these in your paper (e.g., "As illustrated in Figure 1…"). Remember that you may present a set of findings either as a table or as a figure, but not as both. Make sure that your text is not redundant with your tables/figures. For instance, if you present a table of means and standard deviations, you do not need to also report these in the text. However, if you use a figure to represent your results, you may wish to report means and standard deviations in the text, as these may not always be precisely ascertained by examining the figure. Do describe the trends shown in the figure.
- Do not spend any time interpreting or explaining the results save that for the Discussion section.
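The p-value conventions above (exact values to three decimals, no leading zero, and *p* < .001 for very small values) can be captured in a small helper function. This is an illustrative sketch, not an official APA tool:

```python
def format_p(p):
    """Format a p-value in APA style: 'p < .001' for very small values,
    otherwise an exact value to three decimals with no leading zero."""
    if p < 0.001:
        return "p < .001"
    return f"p = {p:.3f}".replace("0.", ".")

print(format_p(0.042))    # p = .042
print(format_p(0.0004))   # p < .001
```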

##### Discussion (labeled, centered, bold)

The goal of the discussion section is to interpret your findings and place them in the broader context of the literature in the area. A discussion section is like the reverse of the introduction, in that you begin with the specifics and work toward the more general (funnel out). Some points to consider:

- Begin with a brief restatement of your main findings (using words, not numbers). Did they support the hypothesis or not? If not, why not, do you think? Were there any surprising or interesting findings? How do your findings tie into the existing literature on the topic, or extend previous research? What do the results say about the broader behavior under investigation? Bring back some of the literature you discussed in the Introduction, and show how your results fit in (or don't fit in, as the case may be). If you have surprising findings, you might discuss other theories that can help to explain the findings. Begin with the assumption that your results are valid, and explain why they might differ from others in the literature.
- What are the limitations of the study? If your findings differ from those of other researchers, or if you did not get statistically significant results, don&rsquot spend pages and pages detailing what might have gone wrong with your study, but do provide one or two suggestions. Perhaps these could be incorporated into the future research section, below.
- What additional questions were generated from this study? What further research should be conducted on the topic? What gaps are there in the current body of research? Whenever you present an idea for a future research study, be sure to explain *why* you think that particular study should be conducted. What new knowledge would be gained from it? Don't just say, "I think it would be interesting to re-run the study on a different college campus" or "It would be better to run the study again with more participants." Really put some thought into what extensions of the research might be interesting/informative, and why.
- What are the theoretical and/or practical implications of your findings? How do these results relate to larger issues of human thoughts, feelings, and behavior? Give your readers "the big picture." Try to answer the question, "So what?"

**Final paragraph:** Be sure to sum up your paper with a final concluding statement. Don't just trail off with an idea for a future study. End on a positive note by reminding your reader why your study was important and what it added to the literature.

##### References (labeled, centered, not bold)

Provide an alphabetical listing of the references (alphabetize by last name of first author). Double-space all, with no extra spaces between references. The second line of each reference should be indented (this is called a hanging indent and is easily accomplished using the ruler in Microsoft Word). See the APA manual for how to format references correctly.

Examples of references to journal articles start on p. 198 of the manual, and examples of references to books and book chapters start on p. 202. Digital object identifiers (DOIs) are now included for electronic sources (see pp. 187-192 of APA manual to learn more).

**Journal article example:**

[Note that only the first letter of the first word of the article title is capitalized; the journal name and volume are italicized. If the journal name has multiple words, each of the major words is capitalized.]

Ebner-Priemer, U. W., & Trull, T. J. (2009). Ecological momentary assessment of mood disorders and mood dysregulation. *Psychological Assessment, 21*, 463-475. doi:10.1037/a0017075

**Book chapter example:**

[Note that only the first letter of the first word of both the chapter title and book title are capitalized.]

Stephan, W. G. (1985). Intergroup relations. In G. Lindzey & E. Aronson (Eds.), *The handbook of social psychology* (3rd ed., Vol. 2, pp. 599-658). New York: Random House.

**Book example:**

Gray, P. (2010). *Psychology* (6th ed.). New York: Worth.

**Table**

There are various formats for tables, depending upon the information you wish to include. See the APA manual. Be sure to provide a table number and table title (the latter is italicized). Tables can be single or double-spaced.

**Figure**

If you have more than one figure, each one gets its own page. Use a sans serif font, such as Helvetica, for any text within your figure. Be sure to label your x- and y-axes clearly, and make sure you've noted the units of measurement of the DV. Underneath the figure, provide a label and brief caption (e.g., "Figure 1. Mean evaluation of job applicant qualifications as a function of applicant attractiveness level"). The figure caption typically includes the IVs/predictor variables and the DV. Include error bars in your bar graphs, and note what the bars represent in the figure caption: Error bars represent one standard error above and below the mean.

**In-Text Citations:**

(see pp. 174-179 of APA manual)

When citing sources in your paper, you need to include the authors&rsquo names and publication date.

You should use the following formats:

- When including the citation as part of the sentence, use AND: "According to Jones and Smith (2003), the…"
- When the citation appears in parentheses, use "&": "Studies have shown that priming can affect actual motor behavior (Jones & Smith, 2003; Klein, Bailey, & Hammer, 1999)." The studies appearing in parentheses should be ordered alphabetically by the first author's last name, and should be separated by semicolons.
- If you are quoting directly (which you should avoid), you also need to include the page number.
- For sources with three or more authors, once you have listed all the authors' names, you may write "et al." on subsequent mentions. For example: "Klein et al. (1999) found that…" For sources with two authors, both authors must be included every time the source is cited. When a source has six or more authors, the first author's last name and "et al." are used every time the source is cited (including the first time).

##### Secondary Sources

"Secondary source" is the term used to describe material that is cited in another source. If, in his article entitled "Behavioral Study of Obedience" (1963), Stanley Milgram makes reference to the ideas of Snow, then Snow (1961) is the primary source, and Milgram (1963) is the secondary source.

Try to avoid using secondary sources in your papers; in other words, try to find the primary source and read it before citing it in your own work. If you must use a secondary source, however, you should cite it in the following way:

Snow (as cited in Milgram, 1963) argued that, historically, the cause of most criminal acts…

The reference for the Milgram article (but not the Snow reference) should then appear in the reference list at the end of your paper.

## Multiple Comparisons

We can follow up these significant ANOVAs with Tukey's HSD post-hoc tests, as shown below in the **Multiple Comparisons** table:

Published with written permission from SPSS Statistics, IBM Corporation.

The table above shows that mean scores for English were statistically significantly different between School A and School B (*p* < .0005), and School A and School C (*p* < .0005), but not between School B and School C (*p* = .897). Mean maths scores were statistically significantly different between School A and School C (*p* < .0005), and School B and School C (*p* = .001), but not between School A and School B (*p* = .443). These differences can be easily visualised by the plots generated by this procedure, as shown below:



## Recognizing Components of an APA-Style Statistical Report

The American Psychological Association (APA) has specific requirements about how statistical results are reported. These generally do not vary much—you usually have an indication of what test was used, the degrees of freedom associated with the test, the actual value of the test statistic, the *p*-value, and an appropriate measure of effect size. In this blog, I will walk you through recognizing components of an APA-style statistical report so that you will feel more confident reading, interpreting, and writing statistical reports.

Here is an example of a statistical result using an *F*-test:

*F*(2, 34) = 2.51, *p* = .003, η² = .04


The first part indicates the test used, in this case the *F*-test. Other common tests include the chi-square (χ²) and the *t*-test. Letters of the English alphabet that represent a statistical value (such as *t*, *F*, and *p*) should be italicized; however, Greek letters representing statistical values (such as χ) are generally not italicized. The degrees of freedom associated with the test should be in parentheses following the statistical letter or symbol. Then, after an equals sign, the actual value of the test statistic is reported to two decimal places. Make sure that you have a space on either side of the equals sign. After a comma comes the *p*-value (notice the italics); *p*-values are reported in the ".000" form, with no leading zero and three places after the decimal. Eta squared (η²) is an effect size often reported for an ANOVA *F*-test. Measures of effect size such as *R*² and *d* are common for regressions and *t*-tests, respectively. Generally, the effect size is listed after the *p*-value, so if you do not immediately recognize it, it might be an unfamiliar effect size.

Here is an example of what results might look like for a *t*-test:

*t*(6) = 0.54, *p* = .547, *d* = .05

And here is the general pattern for a chi-square test, with the degrees of freedom and sample size in parentheses: χ²(df, *N*) = value, *p* = value.

These results should not be listed alone, but always explained. For example, you might say "Variables X and Z were strongly negatively correlated, *r* = -.60," or "the two groups were significantly different, *t*(4) = -4.21, *p* = .041. Participants in Group A scored significantly higher (*M* = 1.23, *SD* = 0.81) than those in Group B (*M* = 0.52, *SD* = 0.10)."

## 3. Hypothesis Tests in APA Style

Nouns (p value, z test, t test) are not hyphenated, but as adjectives they are: t-test results, z-test score.

At the beginning of the results section, restate your hypothesis and then state whether your results supported it. This should be followed by the data and statistics that support or reject the null hypothesis.

One-Way/Two-Way **ANOVA**: State the between-groups degrees of freedom, then state the within-groups degrees of freedom, followed by the F statistic and significance level. For example: “The main effect was significant, *F*(1, 149) = 2.12, *p* = .02.”

**Chi-Square test of Independence:** Report degrees of freedom and sample size in parentheses, then the chi-square value, followed by the significance level. For example:

"Animal response to the stimuli did not differ by species, χ²(1, *N* = 75) = 0.89, *p* = .25."
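For illustration, here is how such a chi-square statistic could be computed by hand on a hypothetical 2x2 contingency table (the counts are made up; they are not the data behind the example above):

```python
# Hypothetical 2x2 table of observed counts.
observed = [[20, 15],
            [25, 15]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
N = sum(row_totals)                      # total sample size

chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        expected = row_totals[i] * col_totals[j] / N
        chi2 += (o - expected) ** 2 / expected

df = (len(observed) - 1) * (len(observed[0]) - 1)   # (rows - 1)(cols - 1)
print(f"chi2({df}, N = {N}) = {chi2:.2f}")
```

Both the degrees of freedom and the sample size go inside the parentheses, which is what distinguishes the chi-square format from the t-test and F-test formats.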

**t tests**: Report the t value and significance level as follows: *t*(54) = 5.43, *p* < .001. What you put in the wording will differ slightly depending on whether you have a one-sample t-test or a t-test for groups. Examples:

- One sample: "Younger teens woke up earlier (*M* = 7:30, *SD* = .45) than teens in general, *t*(33) = 2.10, *p* = .031."
- Dependent/independent samples: "Younger teens indicated a significant preference for video games (*M* = 7.45, *SD* = 2.51) over books (*M* = 4.22, *SD* = 2.23), *t*(15) = 4.00, *p* < .001."
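As a sketch, an independent-samples t statistic of this kind can be computed by hand; the scores below are invented for illustration:

```python
# Made-up scores for two independent groups.
group_a = [7, 9, 6, 8, 10]
group_b = [4, 5, 6, 5, 5]

def mean(xs):
    return sum(xs) / len(xs)

def ss(xs):
    """Sum of squared deviations from the group mean."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs)

n1, n2 = len(group_a), len(group_b)
df = n1 + n2 - 2                          # N - 2 for independent groups
pooled_var = (ss(group_a) + ss(group_b)) / df
se = (pooled_var * (1 / n1 + 1 / n2)) ** 0.5
t = (mean(group_a) - mean(group_b)) / se
print(f"t({df}) = {t:.2f}")
```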

Report **correlations** with degrees of freedom (*N* - 2) in parentheses, followed by the significance level. For example: "The two sets of exam results are strongly correlated, *r*(55) = .49, *p* < .001."
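The *df* = *N* - 2 rule for correlations can likewise be verified by hand. A minimal sketch with made-up paired scores:

```python
# Made-up paired scores for illustration.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 1, 4, 3, 6, 5, 8, 7]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
ss_x = sum((a - mean_x) ** 2 for a in x)
ss_y = sum((b - mean_y) ** 2 for b in y)
r = cov / (ss_x * ss_y) ** 0.5            # Pearson's r

df = n - 2                                # N - 2 degrees of freedom
print(f"r({df}) = {r:.2f}")
```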

*Thank you to Mark Suggs for contributions to this article.*

**References:**

American Psychological Association. (2019). *Publication manual of the American Psychological Association* (7th ed.).

Milan, J. E., & White, A. A. (2010). Impact of a stage-tailored, web-based intervention on folic acid-containing multivitamin use by college women. *American Journal of Health Promotion, 24*(6), 388-395. doi:10.4278/ajhp.071231143

Sheldon. (2013). *APA dictionary of statistics and research methods* (1st ed.). American Psychological Association.


## How to report an F statistic in APA style? - Psychology

Once the statistics have been calculated, the next step is to organize the statistics into tables and figures so the reader can easily read and interpret the statistics. The key to making both tables and figures is to make them clear, both in appearance and in interpretation. When you have finished a table or figure, critically look over it. Will the reader be able to understand what everything in the table/figure means? A good practice is to ask a peer to review the table and give their comments about how it can be more clear.

- Every table must be discussed in the text. This means that you must explain the key elements of a table in the body of your research report. When discussing tables, refer to them by their table numbers only, not by where they appear relative to the text. This is acceptable: "As can be seen in Table 3…" This is not acceptable: "The table above shows…" The rationale for this rule is that the placement of the table may change when the final document is produced.
- Readers should be able to interpret a table just by looking at the table, without reading the body of the research report itself. Therefore, each table should have a clear title that focuses on the key statistics within that table and all acronyms and abbreviations should be explained in the table notes.

- Table titles should be brief, but clearly explain the table.
- All similar entries in the table should carry the same number of decimal points. In other words, every entry within the same column should have the same number of decimal places. In general, whole numbers such as degrees of freedom should have 0 decimal points, p-values and correlations should be rounded to three decimal places (thousandths place value), and all other numbers (e.g., means, standard deviations, t-values, F-values) should carry two decimal places (hundredths place value).
- Tables typically have a horizontal rule at the top, bottom, and after the table labels. Vertical lines are rarely used according to APA regulations.
- If you are not the original author of the table, you must cite the source of the table in a note at the bottom.
- Once the tables have been assembled and placed in the text, number the tables starting from 1 in the order that they appear in the work.
- The decimal points must be lined up in each of the columns. This makes the table easier to read.
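The decimal-place conventions above can be expressed as a tiny formatting helper. This is a sketch; the category names ("df", "p", "r", "mean") are my own labels, not APA terms:

```python
def apa_round(value, kind):
    """Round a table entry per the conventions above: 0 decimals for df,
    3 for p-values and correlations, 2 for everything else."""
    decimals = {"df": 0, "p": 3, "r": 3}.get(kind, 2)
    return f"{value:.{decimals}f}"

print(apa_round(106, "df"))        # 106
print(apa_round(3.14159, "mean"))  # 3.14
```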

- Is the table necessary?
- Is the entire table, including the title, headings, and notes, double spaced?
- Are all comparable tables in the manuscript consistent in presentation?
- Is the title brief but explanatory?
- Does every column have a column heading?
- Are all abbreviations, special use of italics, parentheses, and dashes, and special symbols explained?
- Are all probability levels correctly identified, and are asterisks attached to the appropriate table entries? Is a probability level assigned the same number of asterisks in all tables in the manuscript?
- Are all vertical rules eliminated?
- Will the table fit across the width of the page?
- If all or part of a copyrighted table is reproduced, do the table notes give full credit to the copyright owner?
- Is the table referred to in the text?

Below is an example of a table that follows APA format:

Each of the tables below has some problems. Identify the problems in each of the tables.

Notice the numbers in this table. Under the *Mean* heading, all numbers have the same decimal places (two), but the decimal point is not lined up. It is not easy to see that JS2 has the highest number. Under the *Standard Deviation* header, the numbers are all rounded to different decimal places. All numbers in this column should have the same number of decimal places: two. Here is a corrected version of the table:

What is wrong with this table?

First, the table has vertical rules. Second, the title of the table does not explain what the table represents. A more detailed title should be added. Below is a corrected version of the table. Note that this is not the "APA Format" for presenting the results of a t-test. The APA Manual does not give guidance on t-test tables. Indeed, it is often more common for t-test results to be written in the text instead of being presented in a table. For example, one might say "Females were found to have significantly more knowledge of child development than males (*t*(106) = 2.73, *p* < .01)."

What's wrong with this table?

Most simply, the table is not double-spaced. However, the major problems with this table are more complex. First, the table does not explain what M and SD stand for. It is standard that M is Mean and SD is Standard Deviation, but this should still be clarified in a note at the bottom. More importantly, the mean for Mother's Education is 3.48. What does this mean? That mothers only attended 3.48 years of school? This is highly unlikely. The author probably coded education - perhaps 1 was none, 2 was primary only, 3 was secondary, etc. However, the reader does not know this! A note must be added explaining the coding. Mother's age is understandable. However, the next line is also confusing: Child gender = boy (yes-no %). Presumably, the first part means that 49% of the Nigerian sample were boys, but what does *(yes-no %)* mean? Likewise, what does 1.06 mean for child behavior problems? Is this high (perhaps the range was 0 to 2), or is this low (perhaps the range was 1 to 6)? Without this information, this table is meaningless. Below is a corrected table with a note at the bottom to explain the table.

Figures can be an excellent way for readers to quickly understand and easily interpret the statistical findings. Graphs effectively illustrate means, frequencies, and percentages. Just like tables, figures should also be fully understandable without reading the text, but also be referenced in the text. Figures should also be numbered consecutively. Legends and notes should be included so the reader can easily interpret the figure. One difference is that figures do not have titles. Instead, a caption at the bottom of the figure functions both as a title and an explanation of the figure.

There are no easy rules for determining the format and style of a graph. The researcher must use his/her expertise and judgment to determine whether to use a line graph, pie graph, bar graph, etc. and on developing the presentation of the graph. Again, it is helpful to have a peer review the chart to determine whether there might be a better way to assemble the graph.

- Is the figure necessary?
- Is the figure simple, clean, and free of extra detail?
- Are the data plotted accurately?
- Is the grid scale correctly proportioned?
- Is the lettering large and dark enough to read? Is the lettering compatible in size with the rest of the figure?
- Are terms spelled correctly?
- Are all abbreviations and symbols explained in the figure legend or figure caption? Are the symbols, abbreviations, and terminology in the figure consistent with those in the figure caption? In other figures? In the text?
- Are the figures numbered consecutively with Arabic numerals?
- Are all figures mentioned in the text?
- Are figures that are being reproduced or adapted from another source given proper credit in the figure caption?

Below are some sample figures for percentage, frequency, means, and a line graph, respectively.

## Datasets and Statistics

**APA Style Guide to Electronic References**

For a complete description of citation guidelines refer to the APA Style Guide to Electronic References (2012).

Pew Hispanic Center. (2008). *2007 Hispanic Healthcare Survey* [Data file and code book]. Available from Pew Hispanic Center Web site: http://pewhispanic.org/datasets/

Note: Available from, rather than Retrieved from, indicates that the URL takes you to a download site, rather than directly to the data set file itself.

Graphic Representation of Data

Centers for Disease Control and Prevention. (2005). [Interactive map showing percentage of respondents reporting "no" to, During the past month, did you participate in any physical activities?]. Behavioral Risk Factor Surveillance System. Retrieved from http://apps.nccd.cdc.gov/gisbrfss/default.aspx

*APA 6th edition*

For a complete description of citation guidelines, refer to the Publication Manual of the American Psychological Association, 6th edition (2010).

*Basic form:*

Author/Rightsholder. (Year). Title of data set (Version number) [Description of form]. Location: Name of producer.

or

Author/Rightsholder. (Year). Title of data set (Version number) [Description of form]. Retrieved from http://

*Example:*

Pew Hispanic Center. (2008). *2007 Hispanic Healthcare Survey* [Data file and code book]. Retrieved from http://pewhispanic.org/datasets/

Unpublished raw data from study, untitled work

*Basic form:*

Author, F. N. (Year). [Description of study topic]. Unpublished raw data.

*Example:*

Smith, J. A. (2006). [Personnel survey]. Unpublished raw data.

### How to Cite Statistics in APA Style

**APA Style Guide to Electronic References**

For a complete description of citation guidelines refer to the APA Style Guide to Electronic References (2012).

Graphic Representation of Data

Centers for Disease Control and Prevention. (2005). [Interactive map showing percentage of respondents reporting "no" to, During the past month, did you participate in any physical activities?]. *Behavioral Risk Factor Surveillance System*. Retrieved from http://apps.nccd.cdc.gov/gisbrfss/default.aspx

**APA 6th**

For a complete description of citation guidelines, refer to the Publication Manual of the American Psychological Association, 6th edition (2010).

Citing Specific Parts of a Source

For in-text citations, indicate the page, chapter, figure, or table within the parenthetical citation.

(National Center for Education Statistics, 2008, Table 3)

Entry in a Reference Work

APA does not provide specific information on how to cite a statistical table, but use this general format to cite part of a source (e.g. a statistical table) in the bibliography.

Author. (Year). Title of entry. In Editor (Eds.), *Title of reference book* (pp. xxx-xxx). Retrieved from http:// OR Location: Publisher OR doi:xxxx.

## Reporting Multiple Regressions in APA format – Part One

So this is going to be a very different post from anything I have put up before. I am writing this because I have just spent the best part of two weeks trying to find the answer myself without much luck. Sure, I came across the odd bit of advice here and there and was able to work a lot of it out, but so many of the websites on this topic leave out a bucket load of information, making it difficult to know what they are actually going on about. So after two weeks of wading through websites and textbooks, and having multiple meetings with my university supervisors, I thought I would take the time to write up some instructions on how to report multiple regressions in APA format so that the next poor sap who has this issue doesn’t have to waste all the time I did. If you have no interest in statistics then I recommend you skip the rest of this post.

Ok let’s start with some data. Here is some that I pulled off the internet that will serve our purposes nicely. Here we have a list of sales people, along with their IQ level, their extroversion level and the total amount of money they made in sales this week. We want to see if IQ level and extroversion level can be used to predict the amount of money made in a week.

Now I am not going to show you how to enter the data into SPSS, if you don’t know how to do that I recommend you find out first and then come back. However, I will show you how to calculate the regression and all of the important assumptions that go along with it.

In SPSS you need to click **Analyse > Regression > Linear** and you will get this box, or one very much like it depending on your version of SPSS, come up.

The first thing to do is move your Dependent Variable, in this case Sales Per Week, into the **Dependent** box. Next move the two Independent Variables, IQ Score and Extroversion, into the **Independent(s)** box. We are going to use the **Enter** method for this data, so leave the **Method** dropdown list on its default setting. We now need to make sure that we also test for the various assumptions of a multiple regression to make sure our data is suitable for this type of analysis. There are seven main assumptions when it comes to multiple regressions and we will go through each of them in turn, as well as how to write them up in your results section. These assumptions deal with outliers, collinearity of data, independent errors, random normal distribution of errors, homoscedasticity & linearity of data, and non-zero variances. But before we look at how to understand this information let’s first set SPSS up to report it.

Note: If your data fails any of these assumptions then you will need to investigate why and whether a multiple regression is really the best way to analyse it. Information on how to do this is beyond the scope of this post.

On the **Linear Regression** screen you will see a button labelled **Save**. Click this and then tick the **Standardized** check box under the **Residuals** heading. This will allow us to check for outliers. Click **Continue** and then click the **Statistics** button.

Tick the box marked **Collinearity diagnostics**. This, unsurprisingly, will give us information on whether the data meets the assumption of collinearity. Under the **Residuals** heading also tick the **Durbin-Watson** check box. This will allow us to check for independent errors. Click **Continue** and then click the **Plots** button.

Move the option **\*ZPRED** into the **X** axis box, and the option **\*ZRESID** into the **Y** axis box. Then, under the **Standardized Residual Plots** heading, tick both the **Histogram** box and the **Normal probability plot** box. This will allow you to check for random normally distributed errors, homoscedasticity and linearity of data. Click **Continue**. As the assumption of non-zero variances is tested on a different screen, I will leave explaining how to carry that out until we get to it. For now, click **OK** to run the tests.

The first thing we need to check for is outliers. If we have any they will need to be dealt with before we can analyse the rest of the results. Scroll through your results until you find the box headed **Residual Statistics**.

Look at the **Minimum** and **Maximum** values next to the **Std. Residual** (Standardised Residual) subheading. If the minimum value is equal to or below -3.29, or the maximum value is equal to or above 3.29, then you have outliers. As you can see, in this example data we don’t have any outliers, but if you do, here is what you need to do. Go back to your main data screen and you will see that SPSS has added a new column of numbers titled **ZRE_1**. This contains the standardised residual values for each of your participants. Go down the list, and if you find any values equal to or above 3.29, or equal to or below -3.29, then that participant is an outlier and needs to be removed.

Once you have done this you will need to analyse your data again, in the same way described above, to make sure you have fixed the issue. You may find that you have new outliers when you do this, and these too will need to be dealt with. In my recent experiment I had to run the check for outliers six times before I got them all and the standardised residual values were all within ±3.29. When it comes to writing this up, what you put depends on what results you got, but something along the lines of one of these sentences will do.

An analysis of standard residuals was carried out on the data to identify any outliers, which indicated that participants 8 and 16 needed to be removed.

An analysis of standard residuals was carried out, which showed that the data contained no outliers (Std. Residual Min = -1.90, Std. Residual Max = 1.70).
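If you are curious what the standardised-residual check actually involves, here is a minimal Python sketch. The residual values are made up for illustration (they are not from the post's dataset); regression residuals always average zero, so standardising is just dividing by their standard deviation:

```python
import numpy as np

# Hypothetical regression residuals (sum to zero, as residuals do).
residuals = np.array([120.0, -80.0, 15.0, -100.0, 60.0, -15.0])

# Standardise: divide each residual by the residual standard deviation.
std_residuals = residuals / residuals.std(ddof=1)

# Flag any case whose standardised residual falls at or beyond +/-3.29.
outliers = np.where(np.abs(std_residuals) >= 3.29)[0]

print("Std. Residual Min =", round(std_residuals.min(), 2))
print("Std. Residual Max =", round(std_residuals.max(), 2))
print("Outlier rows:", outliers)
```

With these values no case reaches ±3.29, which is exactly the situation described by the second write-up sentence above.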

To see if the data meets the assumption of collinearity you need to locate the **Coefficients** table in your results. Here you will see the heading **Collinearity Statistics**, under which are two subheadings, **Tolerance** and **VIF**.

If the VIF value is greater than 10, or the Tolerance is less than 0.1, then you have concerns over multicollinearity. Otherwise, your data has met the assumption of collinearity and can be written up something like this:

Tests to see if the data met the assumption of collinearity indicated that multicollinearity was not a concern (IQ Scores, Tolerance = .96, VIF = 1.04; Extroversion, Tolerance = .96, VIF = 1.04).
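The arithmetic behind Tolerance and VIF is straightforward: Tolerance is 1 minus the R² you get when you regress each predictor on all the others, and VIF is its reciprocal. With only two predictors that R² is just the squared correlation between them, so a sketch (with hypothetical IQ and extroversion values, not the post's data) fits in a few lines:

```python
import numpy as np

# Hypothetical predictor data, not the post's dataset.
iq = np.array([100, 110, 95, 120, 105, 98, 115, 102])
extroversion = np.array([12, 15, 10, 14, 13, 9, 16, 11])

# With two predictors, each one's R^2 against the other is just r^2.
r = np.corrcoef(iq, extroversion)[0, 1]
tolerance = 1 - r ** 2        # Tolerance = 1 - R^2
vif = 1 / tolerance           # VIF = 1 / Tolerance

print(f"Tolerance = {tolerance:.2f}, VIF = {vif:.2f}")
# Rule of thumb: VIF > 10 or Tolerance < 0.1 signals multicollinearity.
```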

To check whether your residual terms are uncorrelated you need to locate the **Model Summary** table and the **Durbin-Watson** value.

Durbin-Watson values can be anywhere between 0 and 4; however, what you are looking for is a value as close to 2 as possible in order to meet the assumption of independent errors. As a rule of thumb, if the Durbin-Watson value is less than 1 or greater than 3, it is counted as being significantly different from 2, and the assumption has not been met. Assuming it is met, you can write it up very simply, like this:

The data met the assumption of independent errors (Durbin-Watson value = 2.31).
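The Durbin-Watson statistic itself is easy to compute by hand: it is the sum of squared successive differences of the residuals divided by the residual sum of squares. Here is a sketch with hypothetical residuals (values near 2 suggest independent errors):

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic: sum of squared successive differences
    of the residuals divided by the residual sum of squares."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Hypothetical regression residuals.
resid = [1.2, 0.8, -0.5, -1.1, 0.9, 0.4, -0.2]
dw = durbin_watson(resid)
print(f"Durbin-Watson value = {dw:.2f}")
```

Strongly positively correlated residuals push the statistic toward 0, and strongly negatively correlated (alternating) residuals push it toward 4, which is why 2 is the target.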

**Random Normally Distributed Errors & Homoscedasticity & Linearity**

I’m going to deal with these three things together as all the information comes from the same place. Now, it is at this point that analysing the results becomes more of an art than a science, as you need to look at some graphs and decide, pretty much for yourself, if they meet the various assumptions. We will start with the **Histogram**.

Now, all going well, this should have a nice looking normal distribution curve superimposed over a bar chart of your data. If it does, then your data has met the assumption of normally distributed residuals. However, if you see something like the image below, then you have problems.

Copyright Andy Field – Discovering Statistics Using SPSS

Next you need to look at the **Normal P-P Plot of Regression Standardized Residual**, and yes, I am aware that it says Observed Cum Prob on it and that this is highly amusing. That aside, this basically tells you the same thing as the histogram, just in a different way.

What you are looking for is for the dots to be on, or close, to the line running diagonally across the screen. If it looks something like the image below then again you have problems.

Copyright Andy Field – Discovering Statistics Using SPSS

When it comes to writing this information up you pretty much just have to describe what the two graphs look like. Something like this:

The histogram of standardised residuals indicated that the data contained approximately normally distributed errors, as did the normal P-P plot of standardised residuals, which showed points that were not completely on the line, but close.

Which brings us to the scatterplot, which will tell us if our data meets the assumptions of homoscedasticity and linearity. Now, it is a bit hard to tell from the data we are using if these assumptions are met, as there are so few data points, and so I’m going to once again borrow some images from my textbook.

Copyright Andy Field – Discovering Statistics Using SPSS

Basically you want your scatterplot to look something like the top left hand image. If it looks like any of the others then one or both of the assumptions has not been met (the lines have been added to show the shape of the data; these will not appear on the actual scatterplot). Again this is more art than science and comes down to how you interpret the image. That said, if your data has met all of the other assumptions then the chances are it will have met this one as well, so if you are a little unsure what the scatterplot is telling you, as you might be with the one produced with our data here, then look at your other results for guidance. And when it comes to writing it up, again you just say what you see.

The scatterplot of standardised predicted values (note: you may want to call it the "scatterplot of standardised residuals" instead; either is fine) showed that the data met the assumptions of homogeneity of variance and linearity.

As I said before I have left this one until last as you need to run a little bit of extra analysis to get the information you need. From the menus at the top select **Analyse > Descriptive Statistics > Descriptives** and you will get this box come up.

Add both your IVs and your DV to the **Variable(s)** box and then click **Options**.

Check the **Variance** box under the heading **Dispersion** and then click **Continue**. Click **OK** to run the analysis and you will see this new table added to your results titled **Descriptive Statistics**.

On this table you are looking for the heading **Variance**, and all you need to do is see whether the values are over zero or not. If they are then the assumption is met and can be reported like this:

The data also met the assumption of non-zero variances (IQ Scores, Variance = 122.51; Extroversion, Variance = 15.63; Sales Per Week, Variance = 152407.90).
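The same check is trivial to reproduce outside SPSS. This sketch uses invented values (not the post's dataset); note that SPSS reports the sample variance, i.e. with the n - 1 denominator:

```python
import numpy as np

# Hypothetical data standing in for the post's three variables.
data = {
    "IQ Scores": [100, 110, 95, 120, 105],
    "Extroversion": [12, 15, 10, 14, 13],
    "Sales Per Week": [1500, 2200, 1100, 2800, 1900],
}

for name, values in data.items():
    variance = np.var(values, ddof=1)  # sample variance, matching SPSS
    print(f"{name}: Variance = {variance:.2f} (non-zero: {variance > 0})")
```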

Ok, so that is all the assumptions taken care of, now we can get to actually analysing our data to see if we have found anything significant.

UPDATE 20/09/2013 – When writing this post I used a number of images that I took from a PowerPoint presentation on regressions that I got from my university. While I had no idea where they originally came from, it has been pointed out to me that they are from Andy Field’s book Discovering Statistics Using SPSS, and as such I should have acknowledged this fact when making use of them. I am now doing so and apologise for this oversight; it was never my intention to imply that the images were of my own creation. Also, let me recommend that you pick up a copy of Andy Field’s book. I have been meaning to do so for some time but have been lacking the funds, and I have heard nothing but good things about it from my fellow psychology students. I have been told it is a great resource for all your SPSS and statistical needs.

## One-way ANOVA Part 1

Analysis of variance is often abbreviated ANOVA, and “one-way ANOVA” refers to ANOVA with one independent variable. Whereas a t-test is useful for comparing the means of **two** levels of an independent variable, one-way ANOVA is useful for comparing the means of **two or more** levels of an independent variable.

To test whether the means of the three conditions in Festinger and Carlsmith's (1959) experiment are unequal, select *ANOVA → ANOVA* from the analysis menu. You should get the following dialog:

#### Curse you, JAMOVI trolls!

Hmm, looks like we've got something wrong with the dependent variable, *enjoyable*, but not the independent variable, *condition*. Jamovi does its best to guess the type of each variable, that is, whether the variable is nominal, ordinal or continuous (interval or ratio). In this case, Jamovi guessed that the dependent variable, as well as the independent variable, is nominal. To do an ANOVA, the dependent variable must be continuous, which it is; Jamovi just does not know that. Fortunately, it is an easy change to make. We use the same solution as last time: *Transform → Automatic Recode*:

Return to the Anova Dialog by clicking on the *ANOVA* table in the output window. Move "condition" to "Fixed Factors" and "enjoyable" to "Dependent Variable" like below.

**Before you click "OK",** first click the "Options" button on the right side of the dialog (under "Contrasts" and "Post Hoc"). Another dialog appears, and you should check the options shown below: "Descriptive" and "Homogeneity of variance test":

Click "Continue" and then "OK". You should get the following output:

### ANOVA Output

#### Testing the null hypothesis that the means are equal

The table above is called an "ANOVA table" and it provides a summary of the actual analysis of variance. Your experimental hypothesis (what you hope to find) is that the means of the three groups are different from one another. Specifically, Festinger and Carlsmith's experimental hypothesis was that the mean of the One Dollar group would be higher than the mean of the other two groups. The null hypothesis is the "prediction of no effect." In this case, it is that the means of the three groups are equal. The output above estimates the probability of obtaining data like yours if the null hypothesis were true. The ANOVA table provides you with the following information:
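If you want to see where the F and its two degrees of freedom come from without Jamovi, here is a hedged Python sketch. The ratings are invented (they are not the Festinger and Carlsmith data), but with three groups of five the degrees of freedom come out as k - 1 = 2 and N - k = 12:

```python
from scipy import stats

# Hypothetical enjoyment ratings for three conditions, n = 5 each.
control = [-1, 0, -2, 1, -1]
one_dollar = [2, 1, 3, 2, 1]
twenty_dollars = [0, -1, 1, 0, -1]

f, p = stats.f_oneway(control, one_dollar, twenty_dollars)
df_between = 3 - 1    # k - 1: numerator df
df_within = 15 - 3    # N - k: denominator (error) df

print(f"F({df_between}, {df_within}) = {f:.2f}, p = {p:.3f}")
```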

#### Testing an assumption of ANOVA

The above table is similar to Levene's test that we saw in the output for the *t*-test. It tests whether the variances in the groups are equal. For the ANOVA to produce an unbiased test, the variances of your groups should be approximately equal. If the value under "Sig." (the *p*-value) is less than .05, it means that the variances are UNequal, and you should not use the regular old one-way ANOVA. In the table above, *p* = 0.210, so no problems: you can use the results that follow.
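Levene's test is also available in SciPy if you want to run the same check outside Jamovi. This sketch uses made-up groups whose spreads are identical by construction, so the test comes out non-significant, which is the "no problems" outcome described above:

```python
from scipy import stats

# Hypothetical groups with equal spreads (shifted copies of one another).
g1 = [3, 4, 5, 4, 3]
g2 = [2, 3, 4, 3, 2]
g3 = [4, 5, 6, 5, 4]

stat, p = stats.levene(g1, g2, g3)
print(f"Levene's test: W = {stat:.2f}, p = {p:.3f}")
# p >= .05: treat the variances as equal; the standard ANOVA is fine.
```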

#### Interpreting the ANOVA output

You tested the null hypothesis that the means are equal and obtained a *p*-value of .02. Because the *p*-value is less than .05, you should reject the null hypothesis. You would report this as:

Although you know that the means are unequal, one-way ANOVA does not tell you *which* means are different from *which other* means. It would be very nice to know whether the mean in the One Dollar condition was higher than the means of the other two conditions. In ANOVA, testing whether a particular level of the IV is significantly different from another level (or levels) is called **post hoc** testing. Hey, that sounds familiar! Didn't we see a dialog heading called "Post Hoc"? Go ahead and open **post hoc**. You should get this:

#### The multiple comparison problem

If you set your alpha level to .05 (meaning that you decide to call any *p*-value below .05 "significant"), you will make a Type I error approximately 5% of the time. 5% translates to 1 out of 20 times. That means that if you perform 20 significance tests, each with an alpha level of .05, you can expect one of those 20 tests to yield *p* < .05 even when the data are random. As the number of tests increases, the probability of making a Type I error (a false positive, saying that there is an effect when there is no effect) increases. The multiple comparison problem is that when you do multiple significance tests, you can expect some of those to be significant **just by chance**. Fortunately, there is a solution:
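You can see the problem with a couple of lines of arithmetic: across k independent tests at alpha = .05, the chance of at least one false positive is 1 - (1 - .05)^k, which climbs quickly:

```python
# Family-wise error rate for k independent tests at alpha = .05.
alpha = 0.05
for k in (1, 3, 10, 20):
    familywise = 1 - (1 - alpha) ** k
    print(f"{k} test(s): P(at least one false positive) = {familywise:.2f}")
```

By 20 tests the family-wise rate is roughly .64, which is why a correction such as Tukey's HSD (described next) is needed.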

### Tukey's HSD

First, note that the first word here is "Tukey", as in John Tukey the statistician, not "turkey", as in the bird traditionally eaten at Thanksgiving. John Tukey developed a method for comparing all possible pairs of levels of a factor that has come to be known as "Tukey's Honestly Significant Difference (HSD) test". The results from the ANOVA indicated that the three means were not equal (*p* < .05), but they didn't tell you which means were different from which other means. Tukey's HSD does that: for every possible pair of levels, Tukey's HSD reports whether those means are significantly different. Tukey's HSD solves the multiple comparison problem by effectively *adjusting* the *p*-value of each comparison so that it corrects for multiple comparisons.

So, in that dialog for Post Hoc Comparisons, check the box next to "Tukey", then make sure "condition" is in the right-hand box as shown. Some new output appears:

In the first row of the table above, the Control condition is compared against the One Dollar condition. The mean difference is -1.800 and the *p*-value, listed as Ptukey, is .023. Because *p* < .05, you can consider the difference between the Control and One Dollar condition "significant". The other two comparisons (Control vs. Twenty Dollars and One Dollar vs. Twenty Dollars) were not significant because the *p*-values are above .05.
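For completeness, the same pairwise table can be produced in Python; SciPy (version 1.8 or newer) ships a `tukey_hsd` function. This sketch reuses the hypothetical ratings from earlier, not the real study data, so the differences and p-values shown are purely illustrative:

```python
from scipy import stats

# Hypothetical ratings for the three conditions, n = 5 each.
control = [-1, 0, -2, 1, -1]
one_dollar = [2, 1, 3, 2, 1]
twenty_dollars = [0, -1, 1, 0, -1]

# tukey_hsd returns matrices of pairwise mean differences and p-values.
res = stats.tukey_hsd(control, one_dollar, twenty_dollars)
labels = ["Control", "One Dollar", "Twenty Dollars"]
for i in range(3):
    for j in range(i + 1, 3):
        print(f"{labels[i]} vs {labels[j]}: "
              f"diff = {res.statistic[i][j]:.2f}, p = {res.pvalue[i][j]:.3f}")
```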

#### Writing It Up in APA Style

To report the results of a one-way ANOVA, begin by reporting the significance test results. Then elaborate on those by presenting the pairwise comparison results and, along the way, insert descriptive statistics information to give the reader the means:

#### Common Errors!

Students commonly use the block of text above as a template for answering the homework problems involving ANOVA. That is a reasonable approach, but do not copy the template blindly.

- **Confidence interval units.** The description of the confidence interval includes that it is on a -5 to +5 scale. Not all dependent variables that you use will be on such a scale. Some will be in degrees Fahrenheit, some will be in dollars, some will be in points on an exam. Be sure to use the correct units.
- **Pairwise comparisons.** In the example above, only one comparison was significant. On other problems, you may have 0, 1, 2, or 3 significant comparisons with 3 groups. You would need to describe each of those comparisons if it were significant.
- **"More enjoyable."** The dependent variable in this study asked participants how enjoyable the task was. If the dependent variable had been how much participants agreed with a statement, or how helpful they were, or how aggressive they were, you would need to modify the description from "more enjoyable" so that it fit those situations.

On the next page, we'll look at a way to present the results of a one-way ANOVA in a table.

## How do I Report Different Statistical Tests in APA Format?

While many people know American Psychological Association style as a way to document research sources, the format covers broader applications. Some research papers require scholars to report statistical values or tests as part of their work. The most common types of statistical information researchers have to illustrate in their papers can easily be documented using APA style.

Use an italicized, uppercase N to report the number of cases in an entire sample. For instance, N = 110. As the Purdue University Online Writing Lab notes, use an italicized, lowercase n to report the number of cases in the portion of a sample, such as n = 21.

Enclose confidence intervals in brackets, according to Purdue OWL. For example:

95% CIs [2.2, 2.4], [-6.0, 3.0], and [2.25, 7.0]

Report the correlation (r) and significance level (p) when reporting correlations in your paper, advises the University of Connecticut Writing Center. Ensure that you italicize the r and p when following the subsequent examples.

Summer temperature and ice cream consumption were significantly correlated, r = .71, p < .05.

A nonsignificant correlation of .12 (ns) was found between years of experience and number of errors.
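SciPy's `pearsonr` returns both numbers you need for this format. The temperature and ice-cream values below are invented purely to illustrate, so only the format (not the specific r and p) should be taken from this sketch:

```python
from scipy import stats

# Hypothetical summer temperatures (°C) and ice cream sales.
temp = [21, 24, 27, 30, 33, 35, 22, 28]
ice_cream = [110, 135, 150, 180, 200, 210, 120, 160]

r, p = stats.pearsonr(temp, ice_cream)
print(f"r = {r:.2f}, p = {p:.3f}")  # drop leading zeros in the write-up
```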

Include the means (M) and standard deviations (SD) for each group as well as the t value (t), degrees of freedom (placed in parentheses directly next to the t) and significance level (p). M, SD, t and p must be italicized in APA Style. For example:

Aliens (M = 2.5, SD = .40) reported significantly higher levels of contentment than human beings (M = 1.8, SD = .21), t(1) = 6.79, p < .05.

List the same values for ANOVAs as you do for t-tests, replacing the t value with the ANOVA's F value. Include the numerator and denominator degrees of freedom in parentheses after the F, separated by a comma. Follow the example below, based on data from the University of Connecticut Writing Center.

A main effect of heart rate was found, F(2, 99) = 9.81, p < .01. Men (M = 119.5, SD = 4.67) reported significantly fewer depressive symptoms than women (M = 141.0, SD = 5.24).