Research Article
Issue Date: February 03, 2015
Published Online: February 05, 2015
Updated: January 01, 2020
Effectiveness of a Clinical Decision Support System for Pointing Device Prescription
Author Affiliations
  • Alexandra Danial-Saad, PhD, OT, is Lecturer, Department of Occupational Therapy, Faculty of Social Welfare and Health Sciences, University of Haifa, Mount Carmel, Haifa, Israel, and Coordinator, Clinical Practicum, Academic Arab College for Education in Israel–Haifa
  • Tsvi Kuflik, PhD, is Associate Professor and Chair, Department of Information Systems, University of Haifa, Mount Carmel, Haifa, Israel
  • Patrice L. Weiss, PhD, OT, is Professor, Department of Occupational Therapy, Faculty of Social Welfare and Health Sciences, University of Haifa, Mount Carmel, Haifa, Israel
  • Naomi Schreuer, PhD, OT, is Senior Lecturer, Department of Occupational Therapy, Faculty of Social Welfare and Health Sciences, University of Haifa, Mount Carmel, Haifa, Israel; schreuer@research.haifa.ac.il
Article Information
American Journal of Occupational Therapy, February 2015, Vol. 69, 6902280010. https://doi.org/10.5014/ajot.2015.014811
Abstract

OBJECTIVE. We present a novel, knowledge-driven approach to prescription of pointing devices that uses the Ontology-Supported Computerized Assistive Technology Recommender (OSCAR), a clinical decision support system (CDSS).

METHOD. Fifty-five occupational therapists were divided into four groups: two assistive technology (AT) expert groups and two novice groups. Novice Group 1 used the OSCAR CDSS for the prescription process, and Novice Group 2 used the conventional method. OSCAR’s effectiveness and its impact on users were evaluated.

RESULTS. The ability of Novice Group 1 to make suitable pointing device prescriptions was similar to that of the two expert groups and was significantly better than that of Novice Group 2. The system positively affected Novice Group 1’s learning of the prescription process.

CONCLUSION. The structure and organized framework for clinical reasoning of the OSCAR CDSS appear to enable occupational therapy practitioners inexperienced in AT to achieve performance levels comparable to those of experts.

Occupational therapy plays a central role in the field of assistive technology (AT), which entails the prescription, provision, and adaptation of devices to support client needs (Petty, McArthur, & Treviranus, 2005). However, many occupational therapists do not achieve the level of AT expertise needed to carry out the prescription process effectively (Lahm & Sizemore, 2002). Indeed, AT recommendations provided by clinicians (occupational therapists and other AT specialists) have been shown to be less than fully beneficial for more than a third of clients (Schreuer, 2005).
A successful AT outcome occurs when a client is helped to achieve greater independence and productivity through the use of one or more AT devices (Cook & Miller-Polgar, 2012). The process begins by evaluating the client’s abilities and limitations and continues with a search for relevant information about suitable AT devices. This search is a formidable endeavor for occupational therapy practitioners and other AT specialists because of the continuing development of new technologies and devices (Ivanoff, Iwarsson, & Sonn, 2006). Moreover, the knowledge base is fragmented and lacks sufficiently intelligent search engines to support clinicians in carrying out searches for information about devices; the information is often distributed by manufacturers and may not be validated by an independent source (Cook & Miller-Polgar, 2012).
Another problem that clinicians face is the scarcity of reliable and valid assessment methods for AT device selection (Bernd, Van Der Pijl, & De Witte, 2009). Although existing models provide an important general framework for AT prescription (including the World Health Organization's [2010] International Classification of Functioning, Disability and Health for Children and Youth model, Scherer’s [2002, 2005]  Matching Person and Technology model, Cook and Miller-Polgar’s [2012]  Human Activity Assistive Technology model, and Kintsch and DePaula’s [2002]  framework), they do not always offer sufficiently detailed guidance for specific device selection (Danial-Saad, Weiss, & Schreuer, 2012).
A further problem is the insufficient instruction in AT prescription given to occupational therapy students during their academic training (Gitlow & Sanford, 2003). Although there has been a trend to increase the number of hours dedicated to AT in many occupational therapy curricula (Copley & Ziviani, 2004; Kanny & Anson, 1998), both students and educators have had a limited degree of satisfaction with the current preservice training programs, and students still need additional training after graduation (Brady, Long, Richards, & Vallin, 2007; Gitlow & Sanford, 2003). For example, in a review of preservice training courses, Lenker (2005)  found that most practitioners acquired AT skills on the job, learning from colleagues and through personal trial and error. The need for more effective training at the entry level to narrow the gap between novice and expert clinicians led the Rehabilitation Engineering and Assistive Technology Society of North America (RESNA), one of the key professional AT organizations, to offer specialized certification in AT (Long, Woolverton, Perry, & Thomas, 2007; RESNA, 2012).
Clinical decision support systems (CDSSs) are interactive systems that emulate human reasoning in a given domain. They are designed to analyze data to help health professionals make clinical decisions about individual clients by using information and communication technologies, data, documents, knowledge, and models to identify and solve problems (Berner, 2007). The CDSS presents one or more recommendations to a clinician to consider for implementation (Shortliffe & Cimino, 2006), which are usually accompanied by a summary report of the client’s data record (Goldstein & Hoffman, 2003).
The CDSS literature is advancing rapidly because of the considerable potential of these systems to improve health care providers' performance, leading to better quality of care and clinical outcomes (Pearson et al., 2009). CDSSs were first developed about 40 yr ago and have a successful track record in improving the clinical intervention process. For example, MYCIN, one of the earliest CDSSs and a diagnostic tool for infectious diseases (Shortliffe, 1976), represented expert knowledge by evaluating, prioritizing, and supporting multifactorial clinical decisions. It made use of simple-to-formulate if–then rules.
This article reports on a study that evaluated the ability of a novel CDSS, the Ontology-Supported Computerized Assistive Technology Recommender (OSCAR), to help novice clinicians learn and apply a more effective, expert-driven decision-making process for prescribing physically controllable pointing devices.
Method
Research Design
The study was designed as a prospective cohort study that included two novice occupational therapy groups with similar qualifications in terms of general clinical experience. It compared the difference between OSCAR CDSS use and nonuse in the ability of novice clinicians to make expert AT recommendations. Ethical approval for the study was obtained from the University of Haifa ethics committee.
Participants
Four groups of female occupational therapy experts and novices (N = 55) participated in this study. Expert Group 1, ages 33–51 yr, included 6 senior occupational therapists with a minimum of 10 yr of work experience in AT assessment in clinical rehabilitation or educational settings. Two of the therapists taught AT courses in academic institutions, and all had a master’s degree in occupational therapy. This group took part in developing and constructing the OSCAR CDSS. Because Expert Group 1 was involved in all of the CDSS stages, it was important to compare their recommended pointing device solutions with those of other experts who had not been involved in the development process, that is, Expert Group 2. Expert Group 2, ages 32–61 yr, included 9 senior occupational therapists with similar qualifications to those of Expert Group 1 (i.e., a minimum of 10 yr of work experience in AT assessment in clinical rehabilitation or educational settings).
Participants in Novice Group 1 (n = 20), ages 26–51 yr, had a bachelor’s degree (the entry-level degree for an occupational therapist in Israel) and between 2.5 and 24 yr of clinical experience but no experience in AT. This group used OSCAR to make pointing device prescriptions. Participants in Novice Group 2 (n = 20), ages 29–66 yr, had a bachelor’s degree and between 4 and 40 yr of clinical experience. This group used a conventional approach that included clinical observations and assessment to make pointing device prescriptions (Cook & Miller-Polgar, 2012; Scherer, 2012).
Instruments
OSCAR.
OSCAR was constructed by the authors as a knowledge-based system that uses a clinical knowledge base and a set of matching rules to match client characteristics, needs, and abilities to appropriate pointing device solutions, presenting three recommendations for each set of clinical data entered. The general architecture of OSCAR consists of four parts: a database, a reasoning engine, a user interface, and an administrator interface. The database contains information about the clients, clinicians, and pointing devices and a set of if–then matching rules. The reasoning engine contains the formulas used to combine the matching rules. The user interface enables clinicians to use the system by entering client data and obtaining recommendations for pointing devices.
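The article does not publish OSCAR's internal rule syntax or combination formulas, so the following is only a minimal, hypothetical sketch of how a reasoning engine of this general kind could combine if–then matching rules: each rule that fires on the entered client data adds weight to the devices it supports, and the three highest-scoring devices are returned as recommendations. All client characteristics, device names, and weights shown here are illustrative assumptions, not OSCAR's actual content.

```python
# Hypothetical sketch of a rule-based matching engine (not OSCAR's actual code).
# Each if-then rule inspects the client profile and votes for the devices it supports.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple


@dataclass
class MatchingRule:
    """One if-then rule: IF condition(client) THEN add weight to these devices."""
    condition: Callable[[Dict], bool]   # IF part, evaluated on the entered client data
    devices: List[str]                  # THEN part: devices this rule supports
    weight: float = 1.0                 # relative importance of the rule


def recommend(client: Dict, rules: List[MatchingRule], top_n: int = 3) -> List[Tuple[str, float]]:
    """Fire all matching rules, combine their weights, and return the top_n devices."""
    scores: Dict[str, float] = {}
    for rule in rules:
        if rule.condition(client):                  # reasoning step: does the rule fire?
            for device in rule.devices:
                scores[device] = scores.get(device, 0.0) + rule.weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]


# Illustrative rules and client data (all characteristics and devices are assumed examples).
rules = [
    MatchingRule(lambda c: c["grasp"] == "weak", ["trackball", "joystick"], weight=2.0),
    MatchingRule(lambda c: c["range_of_motion"] == "limited", ["trackball", "touchpad"]),
    MatchingRule(lambda c: c["tremor"], ["keyguard joystick"], weight=1.5),
]

client = {"grasp": "weak", "range_of_motion": "limited", "tremor": False}
print(recommend(client, rules))  # [('trackball', 3.0), ('joystick', 2.0), ('touchpad', 1.0)]
```

In a deployed system, the rules and weights would come from the expert-derived knowledge base stored in the database rather than being hard coded, and the administrator interface would be used to maintain them.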
Case Studies.
Three test case studies were used to evaluate how OSCAR generated AT device prescriptions. The case studies included clients of different ages, genders, and diagnoses (i.e., a 5-yr-old boy with global developmental disorder, a 15-yr-old girl with cerebral palsy and intellectual and developmental disorders, and a 65-yr-old man with a benign spinal cord tumor at Level C3–C4 with resulting peripheral neurological damage) and provided the necessary clinical background information about the three clients through both a written report and a 10-min video clip, in accordance with a standard clinical evaluation protocol. Video clips were used to avoid the difficulties associated with direct, real-time clinical observation as a data collection tool (e.g., the presence of the observer; Labov, 1972) and to reproduce the clinical case exactly for each participant. Because of time constraints, only two of the case studies (the 15-yr-old girl and the 65-yr-old man) were used when testing the two novice groups.
Demographic Questionnaire.
A demographic questionnaire was created to gather data about the participants. It included their personal and professional backgrounds and experience in computer adaptation.
Assistive Technology Adaptation Questionnaire.
An assistive technology adaptation questionnaire (ATAQ) was constructed to compile the results of the adaptation process of Expert Group 1, Expert Group 2, and Novice Group 2. The questionnaire targeted the factors to be considered in the adaptation process, including the client’s ability to operate the pointing device, the name and features of suitable pointing devices, the clinical reasoning process used to reach the AT solution, sources of information for pointing devices, and additional helpful information.
The ATAQ was used as a focus group discussion guide to determine the key features and characteristics of the AT experts' optimal device solution for each case. Each Expert Group 1 participant read a summary about and viewed the video clips of each client and then used the ATAQ to compile a list of three alternative pointing devices for each case, ordered from most to least suitable. Expert Group 1 members subsequently convened in a focus group to discuss their choices with respect to the prescription of the optimal device, the key information needed about the client, and the decision-making process. The focus group, moderated by the authors (who are also AT experts), thereby reached consensus on the three most appropriate pointing devices for each case. A 5-point Likert scale was then constructed to determine the similarity among the optimal pointing device solutions reached by the different groups.
Learning Questionnaire.
A six-item learning questionnaire (LQ), rated on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree), was constructed to assess the novice therapists' perception of OSCAR's effect on their ability to learn the decision-making process for prescribing a pointing device. This assessment was based on Rolston's (1988) conclusion that the organization of information in a CDSS can have a positive effect on a clinician's ability to learn the decision-making process. The LQ addressed the therapists' familiarity with professional terminology, the appropriate sequence of steps to follow during the decision-making process, and their exposure to new information while using OSCAR. It also included four open-ended items, with prompts such as "The organization of the information helped me arrange the stages" and "Describe one or two new facts or approaches you have learned following the use of the system."
Online Data Collection Packages.
Two online data collection packages were compiled to gather responses from participants in three of the groups. Online Package 1 included a copy of the demographic questionnaire, the video clips and written reports of two of the three case studies, and a link to the OSCAR CDSS. The participants in Novice Group 1 completed this package. Online Package 2 was completed by the participants in Novice Group 2 and Expert Group 2. Its content was identical to the first online package but without a link to OSCAR and without the LQ. Expert Group 1 completed the data collection forms on hard copy during the focus group.
Procedures
Expert Group 2 viewed the video clips and completed the ATAQ for each case (Online Package 2). Novice Group 1 was given a presentation about OSCAR and how to use it. Participants then opened Online Package 1, viewed each video clip, used OSCAR, decided which device was optimal and explained why, and completed the LQ to assess OSCAR's impact on their ability to learn about pointing device prescription. Novice Group 2 received the link for Online Package 2, viewed each video clip, and completed the ATAQ (without OSCAR).
Data Analysis
We carried out statistical analysis using IBM SPSS Statistics (Version 18; IBM Corporation, Armonk, NY). We used descriptive statistics (i.e., mean [M], median, and standard deviation [SD]) to summarize the results of the demographic questionnaires, the pointing device solutions, and the LQ. The nonparametric Mann–Whitney U test was used to compare the pointing device solutions between the participant groups. The results of the selection process, performed individually by each participant in Expert Group 1 and Expert Group 2, were then compared to determine whether significant differences existed between them. The similarities and differences in optimal pointing device solutions were compared between Novice Group 1 and the expert groups, between Novice Group 2 and the expert groups, and between Novice Group 1 and Novice Group 2.
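As an illustration of the comparison described above, the sketch below computes descriptive statistics and a Mann–Whitney U test in Python with SciPy. The actual analysis was performed in SPSS, and the score vectors here are invented for demonstration only; they are not the study data.

```python
# Sketch of the group comparison described above, using hypothetical ATAQ scores
# (the actual analysis was run in SPSS; the data below are illustrative only).

import numpy as np
from scipy import stats

# Hypothetical 5-point ATAQ scores for one case study.
novice_group_1 = np.array([5, 4, 5, 5, 4, 3, 5, 5, 4, 5])   # used OSCAR
novice_group_2 = np.array([1, 2, 1, 3, 1, 2, 1, 1, 2, 1])   # conventional method

# Descriptive statistics (M, SD, median), as summarized in Tables 1 and 2.
for name, scores in [("Novice 1", novice_group_1), ("Novice 2", novice_group_2)]:
    print(f"{name}: M = {scores.mean():.1f}, SD = {scores.std(ddof=1):.1f}, "
          f"median = {np.median(scores):.1f}")

# Nonparametric Mann-Whitney U test comparing the two groups' scores.
u_stat, p_value = stats.mannwhitneyu(novice_group_1, novice_group_2, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_value:.4f}")
```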
Results
The descriptive statistics of the pointing device selection scores of the four groups are shown in Tables 1 and 2. The Mann–Whitney U tests of OSCAR's effectiveness showed that the ATAQ scores obtained from the two expert groups did not differ significantly from each other for the three case studies (see Table 1). The ATAQ scores obtained from the two expert groups also did not differ significantly from those of Novice Group 1, which used OSCAR, for the two case studies (see Table 2). In contrast, Expert Group 1, Expert Group 2, and Novice Group 1 obtained significantly higher scores than Novice Group 2 (see Table 2).
Table 1.
Differences in Assistive Technology Adaptation Questionnaire Scores Between Expert Groups 1 and 2 for Case Studies 1–3

Case Study | Expert Group | n | M | SD | Median | Mann–Whitney U | p
1 | 1 | 6 | 4.0 | 1.5 | 5.0 | 25 | .864
1 | 2 | 9 | 4.2 | 1.2 | 5.0 | |
2 | 1 | 6 | 4.5 | 0.5 | 4.5 | 15 | .699
2 | 2 | 6 | 4.6 | 0.5 | 5.0 | |
3 | 1 | 6 | 4.1 | 0.7 | 4.0 | 14 | .485
3 | 2 | 6 | 3.6 | 1.2 | 3.5 | |

Note. M = mean; SD = standard deviation.
Table 2.
Differences in Assistive Technology Adaptation Questionnaire Scores Between the Two Expert Groups and Novice Groups 1 and 2 and Between Novice Groups 1 and 2 for Case Studies 1 and 2

Case Study | Group | n | M | SD | Median | Mann–Whitney U | p
1 | Expert groups | 15 | 4.1 | 1.3 | 5.0 | 142 | .805
1 | Novice Group 1 | 20 | 4.4 | 0.9 | 5.0 | |
2 | Expert groups | 12 | 4.6 | 0.5 | 5.0 | 112 | .774
2 | Novice Group 1 | 20 | 4.6 | 0.4 | 5.0 | |
1 | Expert groups | 15 | 4.1 | 1.3 | 5.0 | 26 | <.001
1 | Novice Group 2 | 20 | 1.6 | 1.1 | 1.0 | |
2 | Expert groups | 12 | 4.6 | 0.5 | 5.0 | 17 | <.0001
2 | Novice Group 2 | 20 | 1.7 | 1.2 | 1.0 | |
1 | Novice Group 1 | 20 | 4.4 | 0.9 | 5.0 | 23 | <.001
1 | Novice Group 2 | 20 | 1.6 | 1.1 | 1.0 | |
2 | Novice Group 1 | 20 | 4.6 | 0.5 | 5.0 | 27 | <.001
2 | Novice Group 2 | 20 | 1.7 | 1.2 | 1.0 | |

Note. M = mean; SD = standard deviation.
To test the impact of OSCAR on learning the adaptation process, Novice Group 1 completed the LQ. The participants gave high total scores for all six items, indicating that they perceived OSCAR as supporting their learning of the AT adaptation process (M = 4.4, SD = 0.5). All the participants expressed a positive attitude toward OSCAR, stating that they would use the system either for the adaptation process or as a support tool during decision making, with comments such as, "The system is easy to use, simple to understand and operate."
The results for items related to the type of information provided (Item 1, M = 4.7, SD = 0.5; Item 4, M = 4.2, SD = 0.8) included comments such as, “It reminded me of the different kinds of hand grasps and their effect on choosing the right pointing device” and “The system reminded me of carrying out a task analysis for the pointing device and its suitability for the user’s functions.” The results for the item related to the organization of information (Item 2, M = 4.7, SD = 0.5) included comments such as, “In my work for the Ministry of Education, I was confused regarding the best way to organize the matching process. Using the system gave me the answer to that problem.” Statements about the display of information (Item 3, M = 4.2, SD = 1.1) and its terminology (Item 6, M = 4.5, SD = 0.7) demonstrated that the novices found that OSCAR improved their learning of the adaptation process, for example, “The well-defined and simple selection possibilities facilitate the adaptation process for the user.” The result for Item 5, “The system offered me information which made me change my pointing device adaptation plan” (M = 3.7, SD = 1.0), was somewhat lower, perhaps because of insufficient clarity in this item’s wording.
Discussion
The results of this study contribute both theoretically and clinically to progress in the domain of AT. Theoretically, the development and evaluation of OSCAR help to bridge the gap between existing general models for AT prescription and the specific guidance needed for pointing device prescription that can support novice clinicians. Clinically, the OSCAR CDSS supports novice clinicians in choosing expert-driven, well-reasoned options for AT prescription. This contribution will help to improve their decision-making abilities, reduce errors, and maximize use and retention of AT pointing devices, leading to improvement in general clinical skills. The innovative approach used in constructing OSCAR's matching rules between user characteristics and pointing device features emulated experts' clinical reasoning and can be replicated for other devices in the AT field. Moreover, the use of OSCAR helped novices understand the adaptation process by reminding them of important user characteristics that need to be considered, helping them learn professional terminology, and increasing their awareness of new information about existing pointing devices.
These findings add to the evidence that a CDSS can significantly improve clinical practice performance in the health care disciplines, as suggested by Garg et al. (2005), who showed that a CDSS improved practitioner performance and patient outcomes in 64% of the cases reviewed. More compelling evidence was provided by Kawamoto, Houlihan, Balas, and Lobach (2005), who found that a CDSS was helpful in 86% of the 70 studies they reviewed.
O’Neill, Dluhy, Fortier, and Michel (2004)  and Lenker (2005)  stated that using a CDSS provides novice clinicians with comprehensive, organized information based on expert clinical reasoning, helping them improve their performance despite the difficulty in dealing with the fragmented knowledge that exists in the field (Cook & Miller-Polgar, 2012). Fensel (2000)  stressed that CDSSs provide added validity to clinicians who need to solve complex problems and relate to new knowledge. When novice clinicians prescribe a pointing device, they are challenged by their lack of training (Gitlow & Sanford, 2003) and experience (Long et al., 2007), the confusing array of related models and frameworks (Lahm & Sizemore, 2002), and the lack of a systematic mechanism (Gailey, 2006) for implementing this complex adaptation process (Bernd et al., 2009; Scherer, 2012). OSCAR helped novice clinicians make a professional selection of a suitable pointing device and reduced the confusion that exists in the adaptation process.
With respect to the impact of a CDSS on the learning process, Berner (2007)  stressed that its use can help develop critical pathways that affect and guide decision making. This suggestion is supported by the scores for Item 2 on the LQ (M = 4.7, SD = 0.5), in which clinicians indicated that using OSCAR enables the development of clear guidelines for the adaptation process, thus partially closing the gap that existed in their preservice training in the AT field (Brady et al., 2007).
In addition to the advantages discussed previously, Varonen, Kortteisto, and Kaila (2008)  suggested that a CDSS may also serve as a data management tool and a memory aid. This idea supports the recommendations by Simborg, Starfield, Horn, and Yourtee (1976)  and Starfield et al. (1979)  that reminding clinicians of information that is necessary for accurate outcomes helps them guard against potential prescription errors and overlooking clinical issues. The results of the LQ in this study indicate that OSCAR helped remind novice clinicians about the information that should be included in their decision-making process.
The novice clinicians expressed a positive attitude regarding their intent to make use of OSCAR in their clinical practice. In fact, the attitude of the participants in the current study was more positive than that of participants in the studies of Varonen et al. (2008); Rousseau, McColl, Newton, Grimshaw, and Eccles (2003); Short, Frischer, and Bashford (2004); and Zheng, Padman, Johnson, and Diamond (2005), in which participants expressed a certain degree of resistance to the adoption of a CDSS. These differences may be indicative of positive changes in clinical attitudes toward technology adoption that have occurred over the 5- to 8-yr period between the former and current studies (Johnson, Zheng, & Padman, 2014).
It is important to keep in mind that although CDSSs such as OSCAR may simulate experts' decision making, they are meant only to assist in the decision-making process (Berner, 2007); they do not relieve clinicians of their ultimate responsibility for these decisions (Berner & Lande, 2007). It is especially important to prevent clinicians from becoming dependent on the system's recommendations and losing trust in their own judgment because of high expectations created by the quality of the system's performance.
Limitations and Future Research
The current research has several limitations that should be addressed in future studies. Although OSCAR took into consideration some of the main user characteristics that can affect the prescription process, the effectiveness of the process also depends on the user's environment and cultural and behavioral traits, which were not included in this initial version of the system. In the current study, the CDSS was constructed from the experts' perspectives only. The inclusion of a focus group of AT clients would support a client-centered approach and add to the social validity of the system. OSCAR was developed to assist clinicians in prescribing physically controllable pointing devices only and cannot be generalized to other AT devices without additional testing. In some cases, extending the CDSS to other hardware categories, such as adapted keyboards, will be a relatively straightforward process. In contrast, the inclusion of software-based AT solutions, such as virtual keyboards that are operated through scanning, will be more demanding in terms of the system's structure.
Comparative studies of clinicians who speak various languages and come from different cultures and professional disciplines will contribute further validity to OSCAR's proof of concept and to the clinical evidence base of its recommendations. The limited number of case studies used in the current study does not fully represent the variety of cases that a clinician will face; thus, OSCAR should be tested further with additional case studies. Finally, although OSCAR's database comprised 35 pointing devices that represented all the categories that existed at the time of the study, other devices may have been introduced since then because of the rapid changes in the AT market. Currently, a link to OSCAR (http://www.links.org.il/en/member/Tamar-Weiss) allows the administrator to update the system and the eligible users.
Implications for Occupational Therapy Practice
This study has the following implications for occupational therapy practice:
  • This article describes the design, development, and evaluation of a novel, knowledge-driven methodology for enhancing the AT prescription process by means of a CDSS.

  • Using the novel methodology enhanced the AT prescription process when performed by novice clinicians, enabling them to reach the optimal AT solutions as defined by experts.

Acknowledgments
This work was partially supported by a Shalem Foundation grant and the Werner Otto Scholarship for female Arab students. The authors cordially thank the following AT experts for their time, effort, and willingness to share their experience in constructing and validating the ontology: Hagit Rubin, Iris Adato-Biran, Michal Emanoell, Naomi Grill, Orit Grinstein, and Talia Farber. The authors warmly thank Ran Habouba, Ella Nizan, Keren Chitiad, Yaron Yosseph, Yaniv Nisenboym, Shlomit Levikson, and Maya First for their help in the technical part of OSCAR development.
References
Bernd, T., Van Der Pijl, D., & De Witte, L. P. (2009). Existing models and instruments for the selection of assistive technology in rehabilitation practice. Scandinavian Journal of Occupational Therapy, 16, 146–158. http://dx.doi.org/10.1080/11038120802449362
Berner, E. S. (Ed.). (2007). Clinical decision support systems: Theory and practice (2nd ed.). New York: Springer Health Informatics Series.
Berner, E. S., & Lande, T. J. (2007). Overview of clinical decision support systems. In E. S. Berner (Ed.), Clinical decision support systems: Theory and practice (2nd ed., pp. 3–22). New York: Springer.
Brady, R., Long, T. M., Richards, J., & Vallin, T. (2007). Assistive technology curriculum structure and content in professional preparation service provider training programs. Journal of Allied Health, 36, 183–192.
Cook, A. M., & Miller-Polgar, J. (2012). Essentials of assistive technology. St. Louis, MO: Elsevier.
Copley, J., & Ziviani, J. (2004). Barriers to the use of assistive technology for children with multiple disabilities. Occupational Therapy International, 11, 229–243.
Danial-Saad, A., Kuflik, T., Weiss, P. L., & Schreuer, N. (2013). Building an ontology for assistive technology using the Delphi method. Disability and Rehabilitation: Assistive Technology, 8, 275–286. http://dx.doi.org/10.3109/17483107.2012.723238
Danial-Saad, A., Weiss, P. L., & Schreuer, N. (2012). Assessment of Computer Task Performance (ACTP) of children and youth with intellectual and developmental disability. Disability and Rehabilitation: Assistive Technology, 7, 450–458. http://dx.doi.org/10.3109/17483107.2011.650780
Fensel, D. (2000). Problem-solving methods: Understanding, development, description, and reuse. Lecture notes in artificial intelligence (LNAI) 1791. Berlin: Springer-Verlag.
Gailey, R. S. (2006). Predictive outcome measures versus functional outcome measures in the lower limb amputee. Journal of Prosthetics and Orthotics, 18, 51–60.
Garg, A. X., Adhikari, N. K., McDonald, H., Rosas-Arellano, M. P., Devereaux, P. J., Beyene, J., … Haynes, R. B. (2005). Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: A systematic review. JAMA, 293, 1223–1238. http://dx.doi.org/10.1001/jama.293.10.1223
Gitlow, L., & Sanford, T. (2003). Assistive technology education needs of allied health professionals in a rural state. Journal of Allied Health, 32, 46–51.
Goldstein, K., & Hoffman, B. (2003). Hypertension recordkeeping and electronic management systems. In J. Izzo & R. Black (Eds.), Hypertension primer: The essentials of high blood pressure (pp. 393–396). Philadelphia: Lippincott Williams & Wilkins.
Ivanoff, S. D., Iwarsson, S., & Sonn, U. (2006). Occupational therapy research on assistive technology and physical environmental issues: A literature review. Canadian Journal of Occupational Therapy, 73, 109–119. http://dx.doi.org/10.1177/000841740607300203
Johnson, M. P., Zheng, K., & Padman, R. (2014). Modeling the longitudinality of user acceptance of technology with an evidence-adaptive clinical decision support system. Decision Support Systems, 57, 444–453. http://dx.doi.org/10.1016/j.dss.2012.10.049
Kanny, E. M., & Anson, D. K. (1998). Current trends in assistive technology education in entry-level occupational therapy curricula. American Journal of Occupational Therapy, 52, 586–591. http://dx.doi.org/10.5014/ajot.52.7.586
Kawamoto, K., Houlihan, C. A., Balas, E. A., & Lobach, D. F. (2005). Improving clinical practice using clinical decision support systems: A systematic review of trials to identify features critical to success. BMJ, 330, 765. http://dx.doi.org/10.1136/bmj.38398.500764.8F
Kintsch, A., & DePaula, R. (2002). A framework for the adoption of assistive technology. In SWAAAC 2002: Supporting learning through assistive technology. Winter Park, CO: Assistive Technology Partners. Retrieved from http://l3d.cs.colorado.edu/clever/assets/pdf/ak-SWAAAC02.pdf
Labov, W. (1972). Sociolinguistic patterns. Philadelphia: University of Pennsylvania Press.
Lahm, E., & Sizemore, L. (2002). Factors that influence assistive technology decision making. Journal of Special Education Technology, 17, 15–26.
Lenker, J. (2005). AT outcomes research: Important considerations for conducting clinically relevant studies. Occupational Therapy Now, 7, 14–18.
Long, T. M., Woolverton, M., Perry, D. F., & Thomas, M. J. (2007). Training needs of pediatric occupational therapists in assistive technology. American Journal of Occupational Therapy, 61, 345–354. http://dx.doi.org/10.5014/ajot.61.3.345
O’Neill, E. S., Dluhy, N. M., Fortier, P. J., & Michel, H. E. (2004). Knowledge acquisition, synthesis, and validation: A model for decision support systems. Journal of Advanced Nursing, 47, 134–142. http://dx.doi.org/10.1111/j.1365-2648.2004.03072.x
Pearson, S. A., Moxey, A., Robertson, J., Hains, I., Williamson, M., Reeve, J., & Newby, D. (2009). Do computerised clinical decision support systems for prescribing change practice? A systematic review of the literature (1990–2007). BMC Health Services Research, 9, 154. http://dx.doi.org/10.1186/1472-6963-9-154
Petty, L. S., McArthur, L., & Treviranus, J. (2005). Clinical report: Use of the Canadian Occupational Performance Measure in vision technology. Canadian Journal of Occupational Therapy, 72, 309–312.
Rehabilitation Engineering and Assistive Technology Society of North America. (2012). RESNA policies and procedures for the development of RESNA assistive technology standards. Arlington, VA: Author. Retrieved from http://web.resna.org/atStandards/documents/RESNAProceduresFinal2012-02-01.pdf
Rolston, D. W. (1988). Principles of artificial intelligence and expert systems development. New York: McGraw-Hill.
Rousseau, N., McColl, E., Newton, J., Grimshaw, J., & Eccles, M. (2003). Practice based, longitudinal, qualitative interview study of computerised evidence based guidelines in primary care. BMJ, 326, 314–321. http://dx.doi.org/10.1136/bmj.326.7384.314
Scherer, M. J. (Ed.). (2002). Assistive technology: Matching device and consumer for successful rehabilitation. Washington, DC: American Psychological Association.
Scherer, M. J. (2005). Assessing the benefits of using assistive technologies and other supports for thinking, remembering and learning. Disability and Rehabilitation, 27, 731–739. http://dx.doi.org/10.1080/09638280400014816
Scherer, M. J. (2012). Assistive technologies and other supports for people with brain impairment. New York: Springer.
Schreuer, N. (2005). Adaptations of computerized work environment and technology as facilitators for participation. Israel Journal of Occupational Therapy, 14, H61–H75.
Short, D., Frischer, M., & Bashford, J. (2004). Barriers to the adoption of computerised decision support systems in general practice consultations: A qualitative study of GPs’ perspectives. International Journal of Medical Informatics, 73, 357–362. http://dx.doi.org/10.1016/j.ijmedinf.2004.02.001
Shortliffe, E. H. (1976). Computer-based medical consultations, MYCIN. New York: Elsevier Science.
Shortliffe, E., & Cimino, J. (2006). Biomedical informatics: Computer applications in health care and biomedicine. New York: Springer.
Simborg, D. W., Starfield, B. H., Horn, S. D., & Yourtee, S. A. (1976). Information factors affecting problem follow-up in ambulatory care. Medical Care, 14, 848–856.
Starfield, B., Steinwachs, D., Morris, I., Bause, G., Siebert, S., & Westin, C. (1979). Patient–doctor agreement about problems needing follow-up visit. JAMA, 242, 344–346. http://dx.doi.org/10.1001/jama.1979.03300040030019
Varonen, H., Kortteisto, T., & Kaila, M.; EBMeDS Study Group. (2008). What may help or hinder the implementation of computerized decision support systems (CDSSs): A focus group study with physicians. Family Practice, 25, 162–167. http://dx.doi.org/10.1093/fampra/cmn020
World Health Organization. (2010). International classification of functioning, disability and health for children and youth (ICF–CY). Geneva: Author.
Zheng, K., Padman, R., Johnson, M. P., & Diamond, H. S. (2005). Understanding technology adoption in clinical care: Clinician adoption behavior of a point-of-care reminder system. International Journal of Medical Informatics, 74, 535–543. http://dx.doi.org/10.1016/j.ijmedinf.2005.03.007