Research Article
Issue Date: March/April 2016
Published Online: February 19, 2016
Updated: January 01, 2021
Validation of the Evidence-Based Practice Confidence (EPIC) Scale With Occupational Therapists
Author Affiliations
  • Julie Helene Clyde, MSc, OT Reg. (Ont.), was Graduate Student, Rehabilitation Sciences Institute, School of Graduate Studies, University of Toronto, Toronto, Ontario, at the time of the study
  • Dina Brooks, PhD, PT, is Professor, Department of Physical Therapy, Faculty of Medicine, University of Toronto, Toronto, Ontario
  • Jill I. Cameron, PhD, is Associate Professor, Department of Occupational Science and Occupational Therapy, Faculty of Medicine, University of Toronto, Toronto, Ontario
  • Nancy M. Salbach, PhD, PT, is Associate Professor, Department of Physical Therapy, Faculty of Medicine, University of Toronto, Toronto, Ontario; nancy.salbach@utoronto.ca
Evidence-Based Practice / Professional Issues
American Journal of Occupational Therapy, February 2016, Vol. 70, 7002280010. https://doi.org/10.5014/ajot.2016.017061
Abstract

OBJECTIVE. This study evaluated the reliability, minimal detectable change (MDC), and construct validity of the Evidence-Based Practice Confidence (EPIC) scale among occupational therapists.

METHOD. In a cross-sectional mail survey, 126 occupational therapists completed the EPIC scale and a questionnaire to provide data for validity testing. Seventy-nine occupational therapists (63%) completed a second EPIC scale a median of 24 days later.

RESULTS. Test–retest reliability was .92 (95% confidence interval [.88, .95]). The MDC values at the 90% and 95% confidence levels were 3.9 percentage points and 4.6 percentage points, respectively. The total EPIC score was significantly associated with holding a master’s or doctoral degree; education in evidence-based practice (EBP); higher EBP knowledge and skill; and frequently searching, reading, and using research findings in clinical decision making (p < .05).

CONCLUSION. The EPIC scale has excellent reliability and acceptable construct validity for use in evaluating EBP self-efficacy among occupational therapists.

Major barriers to implementing research findings in occupational therapy practice include a lack of protected time to search for and incorporate research findings into decision making (Bennett et al., 2003; McCluskey & Lovarini, 2005; Salls, Dolhi, Silverman, & Hansen, 2009); insufficient skill and knowledge to retrieve and critically appraise the research (Bennett et al., 2003; McCluskey, 2003; Rappolt & Tassone, 2002); and a lack of self-efficacy to search for, interpret, and apply research evidence (Bennett et al., 2003; McCluskey & Lovarini, 2005; Welch & Dawson, 2006). Self-efficacy refers to a person’s perceived ability to execute a specific activity (Bandura, 1997). Self-efficacy beliefs are postulated to influence a person’s motivation, thought, affect, and decision to engage in or avoid particular activities or settings (Bandura, 1997). Self-efficacy is considered a key theoretical construct affecting the implementation of evidence-based practice (EBP) among health care professionals (Cane, O’Connor, & Michie, 2012). Perceived self-efficacy is positively associated with level of degree held (Bennett et al., 2003; Salbach, Jaglal, & Williams, 2013); education in EBP (Bennett et al., 2003); and the frequency at which physical therapists search, read, and use professional literature in their clinical practice (Salbach, Guilcher, Jaglal, & Davis, 2009; Salbach, Jaglal, & Williams, 2013).
Self-efficacy is a modifiable variable that can be enhanced by experiencing successful performance (performance accomplishment), observing others experience success (vicarious experience), receiving positive and credible feedback (verbal persuasion), and feeling emotionally and physiologically stable during task performance (emotional arousal; Bandura, 1977). Performance accomplishment is considered the most influential source of efficacy information because it is based on actual experiences of achievement and mastery (Bandura, 1977). According to Bandura’s (1977)  Self-Efficacy Theory, a positive association is expected between self-efficacy and ability or skill in a related task. Therefore, if occupational therapists’ self-efficacy to acquire, appraise, and apply the research literature is increased, they may more frequently undertake the steps of EBP in clinical practice.
The investigation of EBP self-efficacy among occupational therapists is limited by a lack of validated assessment tools. Several studies in the occupational therapy literature have reported measuring perceived confidence or skill in undertaking the necessary activities associated with EBP (Bennett et al., 2003; Dysart & Tomlin, 2002; McCluskey & Lovarini, 2005; Salls et al., 2009). These studies, however, have used questionnaires that contain a limited number of EBP steps without strong supporting evidence of reliability or validity. Given the value of self-efficacy as a determinant of behavior change and an outcome of behavioral interventions, Salbach and Jaglal (2011)  developed the 11-item Evidence-Based Practice Confidence (EPIC) scale to assess confidence among health care professionals in the ability to perform the steps of EBP. Although its face and content validity were evaluated in multiple health professional groups, including occupational therapists, evaluation of its reliability and construct validity has been limited to physical therapists (Salbach, Jaglal, & Williams, 2013). Reliability is not a fixed property of a rating scale; rather, it is population dependent (Terwee et al., 2007). Therefore, the objectives of this study were to evaluate the EPIC scale’s test–retest reliability, floor and ceiling effects, measurement error, and known-groups construct validity.
Hypotheses concerning the evaluation of construct validity were formulated a priori on the basis of the conceptual framework for self-efficacy theory and empirical evidence of associations between self-efficacy beliefs and other variables among health care professionals (Bandura, 1977; Bennett et al., 2003; Jette et al., 2003; Salbach, Jaglal, Korner-Bitensky, Rappolt, & Davis, 2007; Salbach, Jaglal, & Williams, 2013). Specifically, we hypothesized that the mean EPIC scale rating would be 7 percentage points higher among occupational therapists with a master’s or doctoral degree than among those with a diploma or bachelor’s degree; with versus without academic training in EBP; and who conduct online literature searches, read or review research literature related to their clinical practice, or use research literature in clinical decision making at a high versus a low frequency. We also hypothesized that correlations between EPIC scores and the Adapted Fresno Test (AFT; McCluskey & Bishop, 2009) of competence in EBP would be in the fair range (rs = .25–.50, p < .05; Colton, 1974). The difference of 7 percentage points was selected to exceed the MDC value at the 90% confidence level (MDC90) of 5.1 percentage points among physical therapists and to correspond with a medium effect size based on the standard deviation of the mean EPIC scale score reported among physical therapists (Salbach, Williams, & Jaglal, 2013).
Method
Study Design
A cross-sectional mail survey was conducted. The University of Toronto Office of Research Ethics approved the study. Consent was implied for participants who returned a completed questionnaire.
Participants and Sampling
Occupational therapists were considered eligible if they were registered with the College of Occupational Therapists of Ontario (COTO), the provincial regulatory body. An electronic mailing list of 4,749 registered occupational therapists was obtained from the COTO, and a group of therapists randomly sampled from the mailing list was surveyed. Because of the low response rate to this mailing, a second group of occupational therapists was randomly sampled from the mailing list (after removing therapists who were in the first sample) and surveyed. Participant responses were excluded from the reliability analysis if participants described taking part in a continuing education event that involved teaching the steps of EBP during the retest period. A unique code printed on the last page of the questionnaire and on each business-reply envelope was used to identify and link participant responses during data entry and analysis.
Recruitment
A modified Dillman (2007)  approach was implemented to optimize the response rate. Occupational therapists were mailed an information letter inviting them to participate, a questionnaire to verify eligibility and collect data for the validity analysis, a single copy of the EPIC scale, the AFT (for the evaluation of construct validity), and a business-reply envelope. The first response item on the questionnaire asked recipients to indicate whether they were an occupational therapist registered with the provincial regulatory body. If the recipient responded “no,” he or she was instructed to leave the remaining questionnaires blank and return them in the business-reply envelope provided so the recipient could be removed from the mailing list. If the recipient responded “yes,” the information letter contained instructions to complete and date the questionnaire, EPIC scale, and AFT and return them in the business-reply envelope. Eligible recipients who did not want to take part in the study were asked to return the blank questionnaires to inform the researchers of their refusal. A reminder letter and the same baseline package were mailed to nonresponders 3.5 wk after the initial mailing to optimize the response rate.
Within 2 days of receiving a completed baseline package for the construct validity analysis, a second copy of the EPIC scale and a two-item questionnaire were mailed out to the respondents to evaluate test–retest reliability. The questionnaire items asked participants to identify and describe participation in any educational activity targeted at improving their ability to implement EBP since they had completed the first EPIC scale. Thus, participants who completed the second copy of the EPIC scale and two-item questionnaire (i.e., the reliability sample) were also those participants who completed the baseline package and contributed data for the analysis of construct validity (i.e., the validity sample).
Data Collection
Evidence-Based Practice Confidence Scale.
The EPIC scale is a self-report questionnaire. Each item describes an EBP activity. Respondents are asked to rate their confidence in performing each activity on a scale ranging from 0% (no confidence) to 100% (completely confident). Item-level scores are averaged to obtain a summary score that ranges from 0 to 100 percentage points (Salbach & Jaglal, 2011). The EPIC scale’s face and content validity were established among health care professionals through expert review and cognitive interviewing techniques (Salbach & Jaglal, 2011).
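The scoring rule described above can be sketched in a few lines of Python; the function name and the respondent’s ratings below are illustrative, not part of the published scale materials.

```python
# Minimal sketch of EPIC scale scoring: 11 items rated from 0 (no
# confidence) to 100 (completely confident), averaged to yield a total
# score from 0 to 100 percentage points. The ratings are hypothetical.

def epic_total(item_ratings):
    """Average 11 item-level confidence ratings (0-100) into a total score."""
    if len(item_ratings) != 11:
        raise ValueError("The EPIC scale has 11 items.")
    if any(not 0 <= r <= 100 for r in item_ratings):
        raise ValueError("Item ratings must lie between 0 and 100.")
    return sum(item_ratings) / len(item_ratings)

ratings = [90, 70, 70, 60, 60, 40, 40, 70, 90, 80, 80]  # hypothetical respondent
print(round(epic_total(ratings), 1))  # → 68.2
```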
The EPIC scale has demonstrated excellent test–retest reliability (intraclass correlation coefficient [ICC] = .89, 95% confidence interval [CI] [0.85, 0.91], n = 187) and internal consistency (Cronbach’s α = .89 [Cronbach, 1951], 95% CI [0.86, 0.91], n = 275) and acceptable construct validity among physical therapists (Salbach, Jaglal, & Williams, 2013). Results from an exploratory factor analysis supported the EPIC scale’s unidimensionality (Salbach, Williams, & Jaglal, 2013). The estimated MDC90 and MDC95 of the EPIC scale among physical therapists are 5.1 percentage points and 6.1 percentage points, respectively (Salbach, Williams, & Jaglal, 2013). The baseline and follow-up copies of the EPIC scale included a place to record the date of completion.
Adapted Fresno Test.
The AFT (Version 1; McCluskey & Bishop, 2009) is a seven-item self-report instrument used to assess knowledge of and skill in implementing EBP. Items on the AFT are scored by comparing participants’ responses to a grading rubric. Item-level scores are then summed to obtain a total score that can range from 0 to 156 points. The interrater reliability (ICC = .96, 95% CI [0.83, 0.99]) and internal consistency (α = .74) are acceptable among occupational therapists (McCluskey & Bishop, 2009). One author (Clyde) scored the AFT after training with the senior author (Salbach).
Validity Testing.
We captured data on sociodemographic and practice characteristics and variables for construct validity testing, including level of degree held, education in EBP, participation in EBP activities, and participation in research, using items evaluated for face and content validity in previous research (Jette et al., 2003; Salbach et al., 2007; Salbach, Williams, & Jaglal, 2013; see Supplemental Appendix 1, available online at http://otjournal.net; navigate to this article, and click on “Supplemental”).
Analysis
Test–retest reliability was estimated using the ICC(2,1) (Streiner & Norman, 2008) and the associated 95% CI. An ICC of 1.00 indicates perfect reliability; ≥.75, excellent reliability; .40–.74, adequate reliability; and <.40, poor reliability (Andresen, 2000). When interpreting a measure clinically (e.g., at the individual level), an ICC of at least .90 and a lower 95% CI limit of at least .85 are recommended (Nunnally & Bernstein, 1994). The ICC value was used in the calculation of the standard error of measurement (SEM) according to the formula SEM = σ√(1 − R), where σ is the standard deviation of change scores and R is the reliability coefficient (i.e., the ICC; Stratford, 2004). With repeated scoring, the true change in an individual score on the EPIC scale would lie within ±1 SEM of the observed change score 68% of the time. The SEM was used to compute the MDC at the 90% and 95% confidence levels, where MDC95 = 1.96 × √2 × SEM and MDC90 = 1.65 × √2 × SEM (Stratford, 2004). When an individual change score exceeds the MDC, there is reasonable certainty (at the specified confidence level) that it reflects true change, not error or noise (Finch, Brooks, Stratford, & Mayo, 2002).
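The SEM and MDC calculations can be verified numerically from the values this study reports (σ = 5.9, the SD of the total change scores in Table 2; R = .92, the test–retest ICC); this sketch simply evaluates the formulas above.

```python
import math

# Reproduce the SEM and MDC calculations using the study's reported values.
sigma = 5.9   # SD of change scores on the total EPIC score (Table 2)
R = 0.92      # test-retest ICC of the total EPIC score

sem = sigma * math.sqrt(1 - R)        # standard error of measurement
mdc90 = 1.65 * math.sqrt(2) * sem     # minimal detectable change, 90% confidence
mdc95 = 1.96 * math.sqrt(2) * sem     # minimal detectable change, 95% confidence

print(round(sem, 2), round(mdc90, 1), round(mdc95, 1))  # → 1.67 3.9 4.6
```

The output matches the SEM of 1.67 percentage points and the MDC90 and MDC95 of 3.9 and 4.6 percentage points reported in the Results.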
Using baseline data, floor and ceiling effects were calculated as the percentage of participants scoring the minimum (0 percentage points) and maximum (100 percentage points) score, respectively, for each EPIC scale item and for the total EPIC score. A floor effect was considered present if >15% of respondents completing the scale achieved the lowest possible score; a ceiling effect, if >15% of respondents completing the scale achieved the highest possible score (Terwee et al., 2007).
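The floor/ceiling check just described is a simple proportion calculation; a minimal sketch follows, with a hypothetical set of item scores (the function name and data are illustrative).

```python
# Sketch of the floor/ceiling criterion: flag an effect when >15% of
# respondents score the minimum (0) or maximum (100) value (Terwee et
# al., 2007). The item scores below are hypothetical.

def floor_ceiling(scores, minimum=0, maximum=100, threshold=15.0):
    n = len(scores)
    floor_pct = 100 * sum(s == minimum for s in scores) / n
    ceiling_pct = 100 * sum(s == maximum for s in scores) / n
    return {
        "floor %": round(floor_pct, 1),
        "ceiling %": round(ceiling_pct, 1),
        "floor effect": floor_pct > threshold,
        "ceiling effect": ceiling_pct > threshold,
    }

item_scores = [0, 0, 10, 30, 50, 70, 80, 90, 100, 100]  # hypothetical ratings
print(floor_ceiling(item_scores))
```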
Data from categorical variables were summarized using frequencies and percentages. Variables for hypothesis testing were recategorized to create binary variables. For statements with a positive response set, the strongly agree and agree categories were combined so that responses fell into the agree category; the neutral, disagree, and strongly disagree categories were combined so that responses fell into the neutral–disagree category.
Participation in EBP activities was measured by the number of times databases were searched, journal articles were read, and the professional literature was used in clinical decision making. Ordinal scale responses were dichotomized to obtain a similar sample size in each category.
After item categories were collapsed, the independent-samples t test or, if data were not normally distributed, the Mann–Whitney U test was used to test hypothesized relationships between EBP self-efficacy and binary variables, including the highest degree obtained, receipt of the foundations of EBP in academic preparation, and participation in EBP activities. A Type 1 error level of .05 determined statistical significance in hypothesis testing. Pearson correlation coefficients were used to evaluate the association between baseline ratings on the EPIC scale and the AFT. The r value can range from 0 to 1.00 and was interpreted as very good (≥.75), moderate (.50–.75), fair (.25–.50), and little or no relationship (0–.25; Colton, 1974).
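The correlation step can be sketched as follows, computing Pearson’s r for paired scores and labeling its magnitude with the Colton (1974) bands the authors adopted; the paired EPIC and AFT values are hypothetical, not study data.

```python
import math

# Sketch of the EPIC-AFT correlation analysis: Pearson's r plus the
# Colton (1974) interpretation bands used in this study. Data are
# hypothetical.

def pearson_r(x, y):
    """Pearson correlation coefficient for paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def colton_band(r):
    """Interpret |r| per Colton (1974)."""
    r = abs(r)
    if r >= 0.75:
        return "very good"
    if r >= 0.50:
        return "moderate"
    if r >= 0.25:
        return "fair"
    return "little or no relationship"

epic = [60, 65, 70, 72, 80, 85]   # hypothetical EPIC totals
aft = [70, 60, 90, 85, 100, 95]   # hypothetical AFT totals
print(colton_band(pearson_r(epic, aft)))  # → very good
```

Under these bands, the r = .21 observed between EPIC and AFT scores in this study falls just below the "fair" range.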
Results
An initial survey of 538 occupational therapists conducted in September 2011 yielded a response rate of 26%. To increase the sample size, a survey of a second random sample of 539 occupational therapists obtained from the original mailing list was conducted in January 2012 that yielded a response rate of 29%. Figure 1 illustrates the individual and pooled sampling results from the Fall 2011 and Winter 2012 mailings. The median test–retest time interval, based on the date recorded on the EPIC scale at baseline and retest, was 24 days. Table 1 presents participant characteristics for the validity (n = 126) and reliability (n = 79) samples. Table 2 provides reliability results. The ICC for test–retest reliability of the total EPIC scale was .92 (95% CI [0.88, 0.95]). The SEM was 1.67 percentage points (n = 79), yielding an MDC95 of 4.6 percentage points and an MDC90 of 3.9 percentage points.
Figure 1.
Sampling results for mail survey.
Note. Numbers are presented for the pooled sample and for the individual fall (F) and winter (W) mailings. CE = continuing education; EBP = evidence-based practice; EPIC = Evidence-Based Practice Confidence scale.
Table 1.
Characteristics of Study Participants and Their Practice

Values are n (%): Validity Sample (n = 126) | Reliability Sample (n = 79).

Age, yr(a)
 20–29: 15 (12.8) | 7 (9.7)
 30–39: 40 (34.2) | 25 (34.7)
 40–49: 40 (34.2) | 25 (34.7)
 ≥50: 22 (18.8) | 15 (20.8)
Female(a): 117 (92.9) | 72 (91.1)
Highest degree obtained(a)
 Certificate or diploma: 4 (3.2) | 4 (5.1)
 Bachelor’s: 59 (47.2) | 38 (48.7)
 Entry-level master’s: 37 (29.6) | 21 (26.9)
 Applied or research master’s: 24 (19.2) | 14 (17.9)
 Doctorate: 1 (0.8) | 1 (1.3)
Years in clinical practice(a)
 <5: 23 (18.3) | 12 (15.2)
 5–10: 31 (24.6) | 20 (25.3)
 11–15: 13 (10.3) | 7 (8.9)
 >15: 59 (46.9) | 40 (50.6)
% time spent in patient care(a)
 0: 11 (8.9) | 6 (7.6)
 1–25: 12 (9.8) | 8 (10.1)
 26–50: 21 (17.1) | 13 (16.5)
 51–75: 38 (30.9) | 25 (31.6)
 76–100: 41 (33.3) | 27 (34.2)
Serves as a clinical instructor(a): 71 (56.8) | 47 (59.5)
Teaching institution(a): 73 (58.4) | 45 (57.0)
Major service provided(b)
 Neurological: 30 (23.8) | 23 (29.1)
 Consultation: 27 (21.4) | 13 (16.5)
 Other area of direct patient care: 24 (19.0) | 15 (19.0)
 General service provision: 18 (14.3) | 11 (13.9)
 Continuing care: 18 (14.3) | 16 (20.3)
 Mental health and addiction: 17 (13.5) | 14 (17.7)
 Geriatric care: 16 (12.7) | 16 (20.3)
 Musculoskeletal: 16 (12.7) | 13 (16.5)
 Administration: 8 (6.3) | 5 (6.3)
Primary practice setting(b)
 CCAC, visiting agency, or client’s environment: 35 (27.8) | 22 (28.3)
 Rehabilitation facility or hospital: 24 (19.0) | 17 (21.5)
 General hospital: 21 (16.7) | 14 (17.7)
 Children treatment center: 9 (7.1) | 3 (3.8)
 Solo practice office: 9 (7.1) | 6 (7.6)
 Preschool, school system, or board of education: 8 (6.3) | 2 (2.5)
 Mental health and addiction facility: 6 (4.8) | 5 (6.3)
 Other(c): 23 (17.6) | 17 (20.4)
Role
 Administrator: 4 (3.2) | 4 (5.1)
 Manager: 7 (5.6) | 4 (5.1)
 Owner–operator: 11 (8.8) | 7 (8.9)
 Service provider, direct role: 91 (72.8) | 58 (73.4)
 Service provider, professional leader: 7 (5.6) | 5 (6.3)
 Consultant (nonclient care): 9 (7.2) | 4 (5.1)
 Instructor–educator: 2 (1.6) | 0 (0)
 Researcher: 2 (1.6) | 1 (1.3)

Note. CCAC = Community Care Access Centre.
(a) Data were missing for between 1 and 9 participants.
(b) Frequencies may total >126, because participants could indicate more than one selection.
(c) Examples included residential or long-term care, association, government, regulatory organization, nongovernmental organization, and community health center.
Table 2.
Test–Retest Reliability (n = 79)

Values are EPIC score, M (SD), at Baseline | Retest | Difference(a), followed by the ICC [95% CI].

1. Identify a gap in your knowledge. 86.8 (11.0) | 87.2 (9.7) | 0.4 (9.0); ICC = .64 [0.49, 0.76]
2. Formulate a question to guide a literature search. 72.2 (18.9) | 72.5 (16.4) | 0.4 (12.9); ICC = .73 [0.61, 0.82]
3. Effectively conduct an online literature search. 67.4 (23.6) | 66.8 (20.0) | −0.5 (16.1); ICC = .73 [0.61, 0.82]
4. Critically appraise the strengths and weaknesses of study methods. 60.8 (23.0) | 58.0 (23.3) | −2.8 (14.5); ICC = .80 [0.70, 0.86]
5. Critically appraise the measurement properties of standardized tests. 57.0 (24.4) | 54.6 (25.6) | −2.5 (16.0); ICC = .80 [0.70, 0.86]
6. Interpret statistical tests such as t tests or χ2 tests. 41.9 (28.1) | 39.9 (27.7) | −2.0 (18.4); ICC = .79 [0.69, 0.86]
7. Interpret statistical procedures such as linear or logistic regression. 36.5 (28.7) | 37.6 (29.0) | 1.1 (16.3); ICC = .84 [0.76, 0.89]
8. Determine whether evidence applies to your patient/client. 71.8 (21.0) | 74.4 (17.8) | 2.6 (13.6); ICC = .76 [0.65, 0.84]
9. Ask about needs, values, and treatment preferences. 91.0 (10.1) | 91.4 (10.0) | 0.4 (13.4); ICC = .73 [0.61, 0.82]
10. Decide on a course of action. 80.4 (13.7) | 80.8 (15.0) | 0.4 (12.7); ICC = .63 [0.47, 0.74]
11. Continually evaluate the effect of your actions. 81.2 (14.5) | 81.6 (14.8) | 0.4 (11.8); ICC = .66 [0.51, 0.77]
Total: 67.9 (14.9) | 67.7 (14.8) | −0.2 (5.9); ICC = .92 [0.88, 0.95]

Note. CI = confidence interval; EPIC = Evidence-Based Practice Confidence scale; ICC = intraclass correlation coefficient; M = mean; SD = standard deviation.
(a) Retest mean − baseline mean.
The percentage of participants reporting no confidence (i.e., a rating of 0) exceeded 15% for one EPIC scale item: Item 7, “Interpret study results obtained using statistical procedures such as linear or logistic regression” (17.5% of participants). The percentage of participants reporting complete confidence (i.e., a rating of 100) exceeded 15% for two items: Item 1, “Identify a gap in your knowledge” (25.4%), and Item 9, “Ask about needs, values, and treatment preferences” (41.3%). None of the participants obtained a total score of 0 or 100 percentage points on the EPIC scale. Table 3 presents the results of hypothesis testing for construct validation. A statistically significant association between EPIC scores and highest degree obtained; EBP education; and searching, reading, and using the research literature in clinical decision making was observed (p < .05), as was a significant correlation between EPIC and AFT scores (r = .21, p = .02).
Table 3.
Hypothesis Testing for Known-Groups Construct Validation (N = 126)

For each characteristic, the test statistic is the mean difference [95% CI] or the Mann–Whitney U(a) (p); group rows show n and the EPIC score, M (SD) or Mdn (IQR).

Highest degree(b): U = 1,225(a), p < .001
 Diploma or bachelor’s: n = 63, 64.5 (22.7)
 Master’s or doctorate: n = 62, 73.6 (16.8)
EBP education(b): mean difference = 8.3 [2.2, 13.9]
 No: n = 36, 62.7 (16.1)
 Yes: n = 83, 71.0 (13.3)
Searching research literature(b): U = 1,319(a), p = .04
 0–1×/mo: n = 83, 69.1 (24.6)
 ≥2×/mo: n = 41, 72.2 (18.2)
Reading research literature(b): mean difference = 8.5 [1.8, 15.3]
 0–5×/mo: n = 104, 66.8 (14.4)
 ≥6×/mo: n = 21, 75.3 (13.3)
Using research literature(b,c): mean difference = 7.6 [1.3, 13.8]
 0–5×/mo: n = 99, 66.7 (14.8)
 ≥6×/mo: n = 26, 74.3 (11.8)

Note. CI = confidence interval; EBP = evidence-based practice; EPIC = Evidence-Based Practice Confidence scale; IQR = interquartile range; SD = standard deviation.
(a) U statistic for nonparametric test.
(b) Data were missing for between 1 and 7 participants.
(c) In clinical decision making.
Discussion
The results indicate that the EPIC scale has excellent test–retest reliability among occupational therapists. On the basis of total score, the EPIC scale does not demonstrate a floor or ceiling effect. Hypotheses related to associations between EPIC scale scores and degree held; education in EBP; and participation in searching, reading, and using the research literature were confirmed, which supports the construct validity of the EPIC scale among occupational therapists. In addition, EPIC scale scores were weakly correlated with AFT scores.
The point estimate of test–retest reliability of the total score among occupational therapists (ICC = .92) was slightly higher than that observed among physical therapists (ICC = .89; Salbach, Jaglal, & Williams, 2013). This reliability estimate is considered adequate to detect changes in EBP self-efficacy over time among individual occupational therapists (Nunnally & Bernstein, 1994). Change in the total score on the EPIC scale can be interpreted reliably.
The reliability of Items 4–8 (“Critically appraise the strengths and weaknesses of study methods,” “Critically appraise the measurement properties,” “Interpret statistical tests such as t tests or χ2,” “Interpret statistical procedures such as linear or logistic regression,” and “Determine if evidence applies to clinical practice”) was excellent. Items 4, 6, and 7 also had excellent reliability among physical therapists (Salbach, Jaglal, & Williams, 2013). The findings support the relevance of interpreting change on Items 4–8 as the influence of EBP education on EBP self-efficacy. Reliability for the remaining items was ≥.63, falling in the adequate range, with the lower bound of the 95% CI ranging from .47 to .61. The lower reliability of these items may be due to the lower variability in ratings on these items, which tended to be at the high end of the response scale. On the basis of the reliability findings from this study, we recommend that educators and researchers interpret change in EBP self-efficacy on the basis of total scores or item-level scores for Items 4–8 (Nunnally & Bernstein, 1994).
The median test–retest interval of 24 days in this study (based on the dates recorded on the EPIC scale at baseline and at retest) is longer than the recommended 2-wk interval (Streiner & Norman, 2008). The stability of the construct of interest was optimized, however, by excluding from the test–retest reliability analysis the responses of 3 respondents who reported participating, after completing the first EPIC scale, in an educational event targeting the ability to implement EBP.
The MDC90 and MDC95 estimates in this study (MDC90 = 3.9 percentage points, MDC95 = 4.6 percentage points) are slightly smaller than those observed for the EPIC scale among physical therapists (MDC90 = 5.1 percentage points, MDC95 = 6.1 percentage points; Salbach, Williams, & Jaglal, 2013). When the EPIC scale is used to evaluate a continuing education event among occupational therapists, the change in a respondent’s score must therefore exceed 4.6 percentage points to be interpreted as true change rather than as measurement error arising from influences on EBP self-efficacy unrelated to the intervention (Stratford, 2004).
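As background for interpreting these values, MDC estimates of this kind are conventionally derived from the test–retest reliability coefficient and score variability as MDC = z × SEM × √2, with SEM = SD × √(1 − ICC) (Stratford, 2004). The sketch below applies these generic formulas to this sample's summary statistics purely as an illustration; it is not the study's computation, and it does not reproduce the published 3.9- and 4.6-point estimates, whose derivation depends on the exact formula and error variance chosen (see Salbach, Williams, & Jaglal, 2013, on the choice of formula).

```python
from math import sqrt

def mdc(sd: float, icc: float, z: float) -> float:
    """Minimal detectable change from the conventional formulas:
    SEM = SD * sqrt(1 - ICC);  MDC = z * SEM * sqrt(2)."""
    sem = sd * sqrt(1.0 - icc)
    return z * sem * sqrt(2.0)

# Illustrative inputs: baseline SD and ICC of the total EPIC score in this sample
sd, icc = 14.9, 0.92
mdc90 = mdc(sd, icc, 1.645)  # z for a 90% confidence level
mdc95 = mdc(sd, icc, 1.96)   # z for a 95% confidence level
print(round(mdc90, 1), round(mdc95, 1))  # → 9.8 11.7
```

Note that plugging these summary statistics into the conventional formula yields larger values than the published MDCs, which underscores how sensitive the estimate is to the formula and error term used.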
The EPIC scale demonstrated no overall floor or ceiling effect, indicating that it can be used to assess people with both low and high levels of confidence in EBP. At the item level, however, Item 7 (“Interpret statistical procedures such as linear or logistic regression”) demonstrated a floor effect, and Items 1 (“Identify gap in your knowledge”) and 9 (“Ask about needs, values, and treatment preferences”) demonstrated ceiling effects. Although Item 7 had excellent reliability, its floor effect may lead to underestimation of improvement in EBP self-efficacy after a continuing education intervention. Therefore, the total EPIC score, rather than scores on single items, should be analyzed to evaluate the effects of continuing education on EBP.
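Floor and ceiling effects of this kind are commonly flagged when more than 15% of respondents obtain the lowest or highest possible score (Terwee et al., 2007). A minimal sketch of that check, using made-up ratings on a 0–100 scale; the exact criterion applied in this study is described in its Method section.

```python
def floor_ceiling(scores, lowest=0, highest=100, threshold=0.15):
    """Flag floor/ceiling effects when the proportion of respondents at the
    minimum or maximum possible score exceeds the threshold (a common
    convention per Terwee et al., 2007)."""
    n = len(scores)
    at_floor = sum(s == lowest for s in scores) / n
    at_ceiling = sum(s == highest for s in scores) / n
    return {"floor": at_floor > threshold, "ceiling": at_ceiling > threshold}

# Made-up item ratings clustered at the top of the 0-100 response scale
ratings = [100, 100, 100, 90, 80, 70, 100, 60, 100, 50]
print(floor_ceiling(ratings))  # → {'floor': False, 'ceiling': True}
```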
A statistically significant association between EPIC scores and all construct validity variables was observed. The hypothesized difference of 7 percentage points in mean EPIC scale scores was found between occupational therapists with a master’s or doctoral degree and those with a diploma or bachelor’s degree; between occupational therapists with and without academic preparation in EBP; and between occupational therapists who, at a high versus a low frequency, read or reviewed research literature related to their clinical practice and used research literature in clinical decision making. The mean difference in EPIC scale scores between occupational therapists who frequently searched the research evidence and those who searched it infrequently was statistically significant. However, it was smaller than the projected 7 percentage points and may not reflect a true difference, because it was also smaller than the MDC95 of 4.6 percentage points and the MDC90 of 3.9 percentage points estimated for this sample. The results suggest that the majority of the occupational therapists in this study infrequently conducted online literature searches. This finding is consistent with a previous study of Canadian occupational therapists (Rappolt & Tassone, 2002) in which more than half of participants rarely or never conducted online literature searches because they lacked the skills, access, or time to do so. Given this limited variability in search frequency, EPIC scale scores may not be able to discriminate between occupational therapists on the basis of this variable.
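Mean differences with 95% CIs of the kind reported in Table 3 can be approximated from group summary statistics. Below is a generic pooled-variance sketch using the published means, SDs, and ns for the EBP-education groups; the study's exact method is not reported here, so the interval this produces is an approximation and need not match the published [2.2, 13.9].

```python
from math import sqrt

def mean_diff_ci(m1, s1, n1, m2, s2, n2, t_crit):
    """Pooled-variance confidence interval for the difference of two
    independent group means."""
    diff = m1 - m2
    # Pooled variance across the two groups
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = sqrt(sp2) * sqrt(1 / n1 + 1 / n2)
    half = t_crit * se
    return diff, (diff - half, diff + half)

# EBP education groups from Table 3: yes (n = 83) vs. no (n = 36);
# t critical value of ~1.98 for df = 117 at the 95% level
diff, (lo, hi) = mean_diff_ci(71.0, 13.3, 83, 62.7, 16.1, 36, 1.98)
print(round(diff, 1), round(lo, 1), round(hi, 1))  # → 8.3 2.7 13.9
```

The point estimate matches the published mean difference of 8.3; the slight divergence of the lower bound from the published interval illustrates that the exact CI depends on the variance and distributional assumptions used.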
The confirmation of the hypotheses that EPIC scale scores would be positively related to degree held, education in EBP, and participation in EBP activities is consistent with results observed among physical therapists (Salbach, Williams, & Jaglal, 2013). The results also mirror previous research showing that occupational therapists with EBP training were significantly more confident in their EBP skills than occupational therapists without such training (p < .05; Bennett et al., 2003).
According to Bandura’s self-efficacy theory (Bandura, 1997), self-efficacy and capacity or skill for the same activity or behavior should be correlated. One potential explanation for the weak correlation observed between mean EPIC scale scores and mean AFT scores in this study is the use of a self-report tool to evaluate competency in implementing EBP. The AFT asks respondents to recall knowledge and skills and to describe how they would use EBP in the context of clinical scenarios, which may not truly reflect respondents’ EBP skill level or what they actually apply in practice. A test that assesses EBP competence through direct observation would be more appropriate. The low correlation may also be partly due to low variability in the sample: Only 31 of a potential 108 participants (29%) scored in the upper half of the scoring range (78–156 of a maximum of 156 points) on the AFT. This result may reflect the fact that more than half of the sample consisted of senior therapists whose education curricula may not have incorporated EBP.
Limitations
Some limitations should be noted. The response rates were low, which calls the representativeness of the study sample into question. However, the samples of occupational therapists included in the validity and reliability analyses were similar to occupational therapists in Ontario in terms of age (COTO, 2011) and in Canada in terms of age, gender, and highest degree obtained (Canadian Institute for Health Information, 2011), which supports the representativeness of the sample. Unfortunately, comparison of other participant and practice variables was not possible because of discrepancies between categorical variable definitions used in this study and the published data.
The test–retest time interval observed in this study is longer than the recommended 2-wk interval (Streiner & Norman, 2008). Removing data from respondents who reported on the retest that they had participated in EBP education helped to mitigate potential fluctuation in EBP self-efficacy during the longer-than-expected retest interval. Although an evaluation of the unidimensionality of a scale is recommended to confirm that all items measure the same construct (Terwee et al., 2007), such an evaluation was beyond the scope of the current study. The unidimensionality of the EPIC scale, demonstrated in physical therapists (Salbach, Williams, & Jaglal, 2013), should be verified among occupational therapists to support the use of a composite score and to evaluate internal consistency in this population.
Finally, although participants were instructed to indicate how confident they were in their current level of ability to perform EBP when completing the EPIC scale, they may have overestimated their EBP self-efficacy to conform to expectations. To mitigate inflated ratings of self-efficacy, confidentiality of results was emphasized in the information letter to encourage participants to accurately rate their EBP self-efficacy. Specifically, the letter stated that participation would only be known to the principal investigator and that no individual-level information would be shared with employers or the provincial regulatory body.
Implications for Occupational Therapy Practice
The findings of this study have the following implications for occupational therapy practice:
  • The EPIC scale has excellent reliability and acceptable construct validity and demonstrates no overall floor or ceiling effect among occupational therapists.

  • The EPIC scale can be used for descriptive purposes, to monitor change in EBP self-efficacy over time, and as an outcome measure to evaluate the impact of continuing education with the aim of increasing self-efficacy in implementing the process of EBP.

  • Further research is needed to verify the unidimensionality of the EPIC scale for use among occupational therapists and to investigate the relationship between EBP self-efficacy and EBP skill and knowledge.

Conclusion
The EPIC scale has excellent reliability and acceptable construct validity for the evaluation of EBP self-efficacy among occupational therapists. Given the weak correlation observed between EPIC and AFT scores, further exploration of the relationship between EBP self-efficacy beliefs and EBP knowledge and skill is required.
Acknowledgments
The study was funded by a Faculty of Medicine Continuing Education and Professional Development grant from the University of Toronto. Nancy M. Salbach and Jill I. Cameron hold Canadian Institutes of Health Research New Investigator and Ontario Ministry of Research and Innovation Early Researcher Awards. Dina Brooks holds a Canada Research Chair. Aspects of this article were presented at the 2013 Canadian Association of Occupational Therapists annual conference.
References
Andresen, E. M. (2000). Criteria for assessing the tools of disability outcomes research. Archives of Physical Medicine and Rehabilitation, 81(Suppl. 2), S15–S20. http://dx.doi.org/10.1053/apmr.2000.20619
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191–215. http://dx.doi.org/10.1037/0033-295X.84.2.191
Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W. H. Freeman.
Bennett, S., Tooth, L., McKenna, K., Rodger, S., Strong, J., Ziviani, J., . . . Gibson, L. (2003). Perceptions of evidence-based practice: A survey of Australian occupational therapists. Australian Occupational Therapy Journal, 50, 13–22. http://dx.doi.org/10.1046/j.1440-1630.2003.00341.x
Canadian Institute for Health Information. (2011). Occupational therapists in Canada, 2011. Retrieved from http://www.cihi.ca
Cane, J., O’Connor, D., & Michie, S. (2012). Validation of the theoretical domains framework for use in behaviour change and implementation research. Implementation Science, 7, 37. http://dx.doi.org/10.1186/1748-5908-7-37
College of Occupational Therapists of Ontario. (2011). New directions: Annual report. Retrieved from http://www.coto.org
Colton, T. (1974). Statistics in medicine (Vol. 52). Boston: Little, Brown & Co.
Cronbach, L. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334. http://dx.doi.org/10.1007/BF02310555
Dillman, D. A. (2007). Mail and Internet surveys: The tailored design method (2nd ed.). Hoboken, NJ: Wiley.
Dysart, A. M., & Tomlin, G. S. (2002). Factors related to evidence-based practice among U.S. occupational therapy clinicians. American Journal of Occupational Therapy, 56, 275–284. http://dx.doi.org/10.5014/ajot.56.3.275
Finch, E., Brooks, D., Stratford, P. W., & Mayo, N. E. (2002). Physical rehabilitation outcome measures. Hamilton, ON: BC Decker.
Jette, D. U., Bacon, K., Batty, C., Carlson, M., Ferland, A., Hemingway, R. D., . . . Volk, D. (2003). Evidence-based practice: Beliefs, attitudes, knowledge, and behaviors of physical therapists. Physical Therapy, 83, 786–805.
McCluskey, A. (2003). Occupational therapists report a low level of knowledge, skill and involvement in evidence-based practice. Australian Occupational Therapy Journal, 50, 3–12. http://dx.doi.org/10.1046/j.1440-1630.2003.00303.x
McCluskey, A., & Bishop, B. (2009). The Adapted Fresno Test of competence in evidence-based practice. Journal of Continuing Education in the Health Professions, 29, 119–126. http://dx.doi.org/10.1002/chp.20021
McCluskey, A., & Lovarini, M. (2005). Providing education on evidence-based practice improved knowledge but did not change behaviour: A before and after study. BMC Medical Education, 5, 40. http://dx.doi.org/10.1186/1472-6920-5-40
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory. New York: McGraw-Hill.
Rappolt, S., & Tassone, M. (2002). How rehabilitation therapists gather, evaluate, and implement new knowledge. Journal of Continuing Education in the Health Professions, 22, 170–180. http://dx.doi.org/10.1002/chp.1340220306
Salbach, N. M., Guilcher, S. J., Jaglal, S. B., & Davis, D. A. (2009). Factors influencing information seeking by physical therapists providing stroke management. Physical Therapy, 89, 1039–1050. http://dx.doi.org/10.2522/ptj.20090081
Salbach, N. M., & Jaglal, S. B. (2011). Creation and validation of the Evidence-Based Practice Confidence Scale for health care professionals. Journal of Evaluation in Clinical Practice, 17, 794–800. http://dx.doi.org/10.1111/j.1365-2753.2010.01478.x
Salbach, N. M., Jaglal, S. B., Korner-Bitensky, N., Rappolt, S., & Davis, D. (2007). Practitioner and organizational barriers to evidence-based practice of physical therapists for people with stroke. Physical Therapy, 87, 1284–1303. http://dx.doi.org/10.2522/ptj.20070040
Salbach, N. M., Jaglal, S. B., & Williams, J. I. (2013). Reliability and validity of the Evidence-Based Practice Confidence (EPIC) scale. Journal of Continuing Education in the Health Professions, 33, 33–40. http://dx.doi.org/10.1002/chp.21164
Salbach, N. M., Williams, J. I., & Jaglal, S. B. (2013). Reply to Dr. Bland: Despite error in formula, EPIC scale still precise. Journal of Continuing Education in the Health Professions, 33, 283. http://dx.doi.org/10.1002/chp.21195
Salls, J., Dolhi, C., Silverman, L., & Hansen, M. (2009). The use of evidence-based practice by occupational therapists. Occupational Therapy in Health Care, 23, 134–145. http://dx.doi.org/10.1080/073805902773305
Stratford, P. W. (2004). Getting more from the literature: Estimating the standard error of measurement from reliability studies. Physiotherapy Canada, 56, 27–30. http://dx.doi.org/10.2310/6640.2004.15377
Streiner, D. L., & Norman, G. R. (2008). Health measurement scales: A practical guide to their development and use. Oxford, England: Oxford University Press. http://dx.doi.org/10.1093/acprof:oso/9780199231881.001.0001
Terwee, C. B., Bot, S. D. M., de Boer, M. R., van der Windt, D. A. W. M., Knol, D. L., Dekker, J., . . . de Vet, H. C. (2007). Quality criteria were proposed for measurement properties of health status questionnaires. Journal of Clinical Epidemiology, 60, 34–42. http://dx.doi.org/10.1016/j.jclinepi.2006.03.012
Welch, A., & Dawson, P. (2006). Closing the gap: Collaborative learning as a strategy to embed evidence within occupational therapy practice. Journal of Evaluation in Clinical Practice, 12, 227–238. http://dx.doi.org/10.1111/j.1365-2753.2005.00622.x
Figure 1.
Sampling results for mail survey.
Note. Numbers are presented for the pooled sample and for the individual fall (F) and winter (W) mailings. CE = continuing education; EBP = evidence-based practice; EPIC = Evidence-Based Practice Confidence scale.
Table 1.
Characteristics of Study Participants and Their Practice

Characteristic | Validity Sample (n = 126), n (%) | Reliability Sample (n = 79), n (%)
Age, yr^a
  20–29 | 15 (12.8) | 7 (9.7)
  30–39 | 40 (34.2) | 25 (34.7)
  40–49 | 40 (34.2) | 25 (34.7)
  ≥50 | 22 (18.8) | 15 (20.8)
Female^a | 117 (92.9) | 72 (91.1)
Highest degree obtained^a
  Certificate or diploma | 4 (3.2) | 4 (5.1)
  Bachelor’s | 59 (47.2) | 38 (48.7)
  Entry-level master’s | 37 (29.6) | 21 (26.9)
  Applied or research master’s | 24 (19.2) | 14 (17.9)
  Doctorate | 1 (0.8) | 1 (1.3)
Years in clinical practice^a
  <5 | 23 (18.3) | 12 (15.2)
  5–10 | 31 (24.6) | 20 (25.3)
  11–15 | 13 (10.3) | 7 (8.9)
  >15 | 59 (46.9) | 40 (50.6)
% time spent in patient care^a
  0 | 11 (8.9) | 6 (7.6)
  1–25 | 12 (9.8) | 8 (10.1)
  26–50 | 21 (17.1) | 13 (16.5)
  51–75 | 38 (30.9) | 25 (31.6)
  76–100 | 41 (33.3) | 27 (34.2)
Serves as a clinical instructor^a | 71 (56.8) | 47 (59.5)
Teaching institution^a | 73 (58.4) | 45 (57.0)
Major service provided^b
  Neurological | 30 (23.8) | 23 (29.1)
  Consultation | 27 (21.4) | 13 (16.5)
  Other area of direct patient care | 24 (19.0) | 15 (19.0)
  General service provision | 18 (14.3) | 11 (13.9)
  Continuing care | 18 (14.3) | 16 (20.3)
  Mental health and addiction | 17 (13.5) | 14 (17.7)
  Geriatric care | 16 (12.7) | 16 (20.3)
  Musculoskeletal | 16 (12.7) | 13 (16.5)
  Administration | 8 (6.3) | 5 (6.3)
Primary practice setting^b
  CCAC, visiting agency, or client’s environment | 35 (27.8) | 22 (28.3)
  Rehabilitation facility or hospital | 24 (19.0) | 17 (21.5)
  General hospital | 21 (16.7) | 14 (17.7)
  Children treatment center | 9 (7.1) | 3 (3.8)
  Solo practice office | 9 (7.1) | 6 (7.6)
  Preschool, school system, or board of education | 8 (6.3) | 2 (2.5)
  Mental health and addiction facility | 6 (4.8) | 5 (6.3)
  Other^c | 23 (17.6) | 17 (20.4)
Role
  Administrator | 4 (3.2) | 4 (5.1)
  Manager | 7 (5.6) | 4 (5.1)
  Owner–operator | 11 (8.8) | 7 (8.9)
  Service provider, direct role | 91 (72.8) | 58 (73.4)
  Service provider, professional leader | 7 (5.6) | 5 (6.3)
  Consultant (nonclient care) | 9 (7.2) | 4 (5.1)
  Instructor–educator | 2 (1.6) | 0 (0)
  Researcher | 2 (1.6) | 1 (1.3)

Note. CCAC = Community Care Access Centre.
^a Data were missing for between 1 and 9 participants.
^b Frequencies may total >126 because participants could indicate more than one selection.
^c Examples included residential or long-term care, association, government, regulatory organization, nongovernmental organization, and community health center.
Table 2.
Test–Retest Reliability (n = 79)

Item (Shortened Descriptor) | Baseline, M (SD) | Retest, M (SD) | Difference^a, M (SD) | ICC | 95% CI
1. Identify a gap in your knowledge. | 86.8 (11.0) | 87.2 (9.7) | 0.4 (9.0) | .64 | [0.49, 0.76]
2. Formulate a question to guide a literature search. | 72.2 (18.9) | 72.5 (16.4) | 0.4 (12.9) | .73 | [0.61, 0.82]
3. Effectively conduct an online literature search. | 67.4 (23.6) | 66.8 (20.0) | −0.5 (16.1) | .73 | [0.61, 0.82]
4. Critically appraise the strengths and weaknesses of study methods. | 60.8 (23.0) | 58.0 (23.3) | −2.8 (14.5) | .80 | [0.70, 0.86]
5. Critically appraise the measurement properties of standardized tests. | 57.0 (24.4) | 54.6 (25.6) | −2.5 (16.0) | .80 | [0.70, 0.86]
6. Interpret statistical tests such as t tests or χ2 tests. | 41.9 (28.1) | 39.9 (27.7) | −2.0 (18.4) | .79 | [0.69, 0.86]
7. Interpret statistical procedures such as linear or logistic regression. | 36.5 (28.7) | 37.6 (29.0) | 1.1 (16.3) | .84 | [0.76, 0.89]
8. Determine whether evidence applies to your patient/client. | 71.8 (21.0) | 74.4 (17.8) | 2.6 (13.6) | .76 | [0.65, 0.84]
9. Ask about needs, values, and treatment preferences. | 91.0 (10.1) | 91.4 (10.0) | 0.4 (13.4) | .73 | [0.61, 0.82]
10. Decide on a course of action. | 80.4 (13.7) | 80.8 (15.0) | 0.4 (12.7) | .63 | [0.47, 0.74]
11. Continually evaluate the effect of your actions. | 81.2 (14.5) | 81.6 (14.8) | 0.4 (11.8) | .66 | [0.51, 0.77]
Total | 67.9 (14.9) | 67.7 (14.8) | −0.2 (5.9) | .92 | [0.88, 0.95]

Note. Values are EPIC scores. CI = confidence interval; EPIC = Evidence-Based Practice Confidence scale; ICC = intraclass correlation coefficient; M = mean; SD = standard deviation.
^a Retest mean − baseline mean.
Table 3.
Hypothesis Testing for Known-Groups Construct Validation (N = 126)

Characteristic | n | EPIC Score, Mean (SD) or Median (IQR) | Mean Difference [95% CI] or U^a (p)
Highest degree^b | — | — | U = 1,225 (p < .001)
  Diploma or bachelor’s | 63 | 64.5 (22.7) | —
  Master’s or doctorate | 62 | 73.6 (16.8) | —
EBP education^b | — | — | 8.3 [2.2, 13.9]
  No | 36 | 62.7 (16.1) | —
  Yes | 83 | 71.0 (13.3) | —
Searching research literature^b | — | — | U = 1,319 (p = .04)
  0–1×/mo | 83 | 69.1 (24.6) | —
  ≥2×/mo | 41 | 72.2 (18.2) | —
Reading research literature^b | — | — | 8.5 [1.8, 15.3]
  0–5×/mo | 104 | 66.8 (14.4) | —
  ≥6×/mo | 21 | 75.3 (13.3) | —
Using research literature^b,c | — | — | 7.6 [1.3, 13.8]
  0–5×/mo | 99 | 66.7 (14.8) | —
  ≥6×/mo | 26 | 74.3 (11.8) | —

Note. — = not applicable; CI = confidence interval; EBP = evidence-based practice; EPIC = Evidence-Based Practice Confidence scale; IQR = interquartile range; SD = standard deviation.
^a U statistic for nonparametric test.
^b Data were missing for between 1 and 7 participants.
^c In clinical decision making.