Data Collection and Analysis Plan: Evaluation Proposal

Introduction

One way to determine the Skill Mastery And Resilience Training (SMART) treatment program’s effectiveness is to collect and evaluate data. The treatment program will be the independent variable, while the various outcomes (relating to the objectives listed below) will be the dependent variables. The research goal is to determine whether the SMART treatment program is appropriate for helping individuals with adverse childhood experiences and their families recover and lead better lives. The specific objectives that will guide and direct the study include:

  1. To understand the impact of the SMART treatment program on the pain, suffering, and damage that individuals with adverse childhood experiences (ACE) and their families have had;
  2. To understand the impact of the SMART treatment program on the mental and general health of individuals with adverse childhood experiences and their families;
  3. To understand the impact of the SMART treatment program on divorce rates within the city of Santa Ana, California;
  4. To understand the impact of the SMART treatment program on the communication skills of individuals with adverse childhood experiences and their families;
  5. To understand the impact of the SMART treatment program on the problem-solving skills of individuals with adverse childhood experiences and their families;
  6. To understand the impact of the SMART treatment program on emotional composure as a coping mechanism among individuals with adverse childhood experiences and their families.

The research hypotheses, which represent the author’s assumptions about the study’s possible outcomes, are based on the objectives listed above. The researcher will test these hypotheses through the collection and analysis of data. Notably, hypotheses can be either null or alternative. The null hypotheses are given below:

  1. H0: The SMART treatment program does not alleviate pain, suffering, and damage that individuals with adverse childhood experiences (ACE) and their families have had
  2. H0: The SMART treatment program does not improve the mental and general health of individuals with adverse childhood experiences and their families
  3. H0: The SMART treatment program does not reduce divorce rates by 15% within the city of Santa Ana, California
  4. H0: The SMART treatment program does not aid the development of communication skills among individuals with adverse childhood experiences and their families
  5. H0: The SMART treatment program does not aid the development of problem-solving skills among individuals with adverse childhood experiences and their families
  6. H0: The SMART treatment program does not aid the development of emotional composure as a coping mechanism among individuals with adverse childhood experiences and their families

Research Design for the Evaluation

The research will use an experimental design in which the author recruits participants, measures them against the outlined objectives, and assigns them to two groups. The first group will be the control group, to which the author will not apply the SMART treatment program. To the second, experimental group, the author will apply the SMART treatment program over one year. The control group will have 10 individuals, while the experimental group will have 20 participants.

Figure 1: Experimental Research Design Diagram.

The experimental design is desirable in this case for several reasons. First, it gives the researcher a high level of control. The ability to use both an experimental and a control group gives the researcher a great deal of flexibility, allowing for more accurate measurement and recording of outcomes (Rogers & Révész, 2020). The experimental design also allows the researcher to control other factors that may affect the results so that the study can focus on specific phenomena (Rogers & Révész, 2020). The main disadvantage of experimental research is that it can produce artificial results that only apply to a specific situation.
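For illustration, the brief sketch below shows one way the 30 recruited participants could be randomly allocated to the control group (n = 10) and the experimental group (n = 20). The participant identifiers and the use of Python are assumptions made for this example only, not part of the program’s actual procedures.

```python
import random

# Hypothetical identifiers for the 30 recruited participants
participants = [f"P{i:02d}" for i in range(1, 31)]

# Shuffle and split: the first 10 form the control group, the remaining 20
# form the experimental (SMART treatment) group
random.seed(42)  # fixed seed so the allocation can be reproduced
random.shuffle(participants)
control_group = participants[:10]
experimental_group = participants[10:]

print(f"Control group (n={len(control_group)}):", control_group)
print(f"Experimental group (n={len(experimental_group)}):", experimental_group)
```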

Evaluation Sample

The evaluation sample will comprise 30 individuals drawn from ten families. This convenience sample will help the researcher evaluate the SMART treatment program’s effectiveness before it is rolled out to a larger population. Convenience samples are non-probability samples drawn from the part of the population most accessible to the researcher. The author will choose members from this accessible pool at random, with the only condition being that the individuals have suffered adverse childhood experiences. The sample is also unlikely to be representative because it is small (30 participants) relative to the entire population of individuals with adverse childhood experiences. Fortunately, the sample does not have to be representative at this stage because this is a pilot project. Moreover, because this is an experimental study, the aim is not to generalize findings back to the population. Instead, it is to test the effectiveness of a novel treatment program and plan for its continued use in the future to attain greater gains with a larger sample.

When conducting the evaluation research with the selected sample, the ethical considerations will touch on recruitment of the sample, interaction with the participants, treatment of the collected data, and presentation of the final report. The author will recruit individuals and obtain their full and informed consent before conducting the study. The author will not include any children younger than 18 years without their parents’ or guardians’ consent. The author will keep all interactions with the participants professional (Snoek & Horstkötter, 2018). Any data obtained from the participants will be protected and handled carefully to prevent loss or damage and to maintain the participants’ confidentiality. The author will also analyze the data carefully, taking precautions to avoid biased assessments, and will not attempt to alter the findings even if they do not show what was initially anticipated. Finally, the author will prepare a report that paints a clear picture of the research and the findings without unduly changing or altering the outcomes to reflect a particular point of view.

Measurement Tools and Data Sources

Data will come from the participants, and the author will measure six outcomes to understand the general impact and effectiveness of the SMART treatment program. The author will measure these outcomes at the start of the experiment and again once every month until the 12-month experimental period is over. The scientific tools for measuring emotional and similar outcomes are mostly questionnaires. They contain a series of standardized questions that the researcher asks the participants during the evaluation period to determine their progress. In addition, the researcher will use observation and personal experience to approximate the SMART treatment program’s impact on individuals. The specific measurement tools for each of the primary issues in the six objectives or outcomes are shown in Table 1. Notably, the selected measurement tools are standardized instruments with established reliability and validity, and the data sources (i.e., the participants) are considered reliable. The author will use a clinical dashboard to display patient information for each of the 30 participants and track their progress.

Key Objective Area(s) and Measurement Tool(s)

  1. Pain, suffering, and damage: Chronic Pain Acceptance Questionnaire – Revised (CPAQ-R) (McCracken et al., 2004); Impact of Event Scale – Revised (IES-R) (Weiss & Marmar, 1996); Fear Questionnaire (Marks & Mathews, 1979)
  2. Mental and general health: Health Anxiety Inventory (Salkovskis et al., 2002); Collaborative Case Conceptualization Rating Scale (Kuyken et al., 2016); Behavioral Activation for Depression Scale – Short Form (BADS-SF) (Manos et al., 2011)
  3. Divorce rates: Author observation
  4. Communication skills: Dissociative Experiences Scale – II (Carlson & Putnam, 1993); Shutdown Dissociation Scale (Shut-D) (Schalinski et al., 2015)
  5. Problem-solving skills: Author observation
  6. Emotional composure: Author observation

Table 1: Key Objective Areas and their Measurement Tools.
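Because each instrument in Table 1 will be administered at baseline and then monthly for 12 months, the resulting data take a repeated-measures form: one score per participant, per instrument, per time point. The sketch below illustrates one possible way of organizing such records for the clinical dashboard; the class name, instrument label, and scores are hypothetical and serve only as an example, not as the program’s actual data system.

```python
from dataclasses import dataclass, field

@dataclass
class ParticipantRecord:
    """Scores for one participant across the 13 measurement points
    (baseline = month 0, then months 1 through 12)."""
    participant_id: str
    group: str                                  # "control" or "experimental"
    scores: dict = field(default_factory=dict)  # instrument -> {month: score}

    def record_score(self, instrument: str, month: int, score: float) -> None:
        """Store the score obtained on one instrument at one time point."""
        self.scores.setdefault(instrument, {})[month] = score

    def change_from_baseline(self, instrument: str, month: int = 12) -> float:
        """Difference between the given month and the baseline measurement."""
        return self.scores[instrument][month] - self.scores[instrument][0]

# Hypothetical usage with CPAQ-R scores for one experimental participant
p = ParticipantRecord("P01", "experimental")
p.record_score("CPAQ-R", 0, 45)    # baseline measurement
p.record_score("CPAQ-R", 12, 78)   # end of the 12-month period
print(p.change_from_baseline("CPAQ-R"))  # prints 33
```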

Plan for Analyzing Evaluation Data

The plan for analyzing the evaluation data is to examine changes in participants after the treatment. The analysis will compare the outcomes at the end of the evaluation period with the baseline conditions (when the participants were starting the treatment). The researcher will use the data to answer each research question by making comparisons and drawing conclusions. The author will use inferential statistical tests such as the paired t-test, which compares mean scores before and after the treatment (Sharma, 2018). Following the statistical analysis, the researcher will summarize the data in tables and graphs, taking into account its type, nature, and collection date. The mobile access portal (MAP) dashboard will be useful because some participants may be affected by the digital divide: a lack of regular computer access can cause patients to miss out on internet-based health innovations (Graetz et al., 2018). MAP will help participants manage outcomes better (Graetz et al., 2019; Ancker et al., 2017), and it allows more effective and efficient access to information for studying and understanding patient trends.
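As a concrete illustration of the planned baseline versus end-of-period comparison, the sketch below applies a paired-samples t-test to hypothetical scores for the 20 experimental participants. The values are invented for demonstration purposes and do not represent collected data.

```python
from scipy import stats

# Hypothetical baseline and month-12 scores for the 20 experimental
# participants on one outcome measure (illustrative values only)
baseline = [52, 48, 55, 60, 47, 50, 58, 49, 53, 56,
            51, 46, 54, 59, 50, 48, 57, 52, 55, 49]
month_12 = [61, 55, 63, 66, 52, 58, 65, 54, 60, 64,
            59, 50, 62, 67, 57, 53, 66, 60, 63, 56]

# Paired-samples t-test: does the mean change from baseline differ from zero?
t_stat, p_value = stats.ttest_rel(month_12, baseline)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# The corresponding null hypothesis would be rejected if p < 0.05
if p_value < 0.05:
    print("Statistically significant change from baseline.")
```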

Strengths and Limitations of the Evaluation Plan

The evaluation plan’s main strength is that it will use a significant amount of data collected over 12 months. Data collected over a longer period tend to paint a clearer picture than data collected over a shorter one. The main disadvantage is that much of the data will be qualitative, making it difficult to conduct statistical analysis; the researcher will have to convert the qualitative data to numerical information to facilitate statistical analysis. The evaluation plan is also based on psychological questionnaires developed by researchers in the past. These contain standardized but generalized questions that help in the assessment of a given situation, so they may not offer the precision needed to examine the participants in this research. Despite this limitation, the assessment tools provide a way of measuring outcomes that are not directly observable, such as pain and mental health. The resulting information is useful for understanding the patients and their progress during treatment, thereby illustrating the effectiveness of the SMART treatment program.
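As an example of how observational data could be converted to numerical form for analysis, the sketch below codes qualitative ratings on an assumed five-point scale. The category labels and the scale itself are illustrative assumptions rather than part of the evaluation plan.

```python
# Hypothetical five-point coding scheme for the author's observational ratings
RATING_SCALE = {
    "much worse": 1,
    "worse": 2,
    "no change": 3,
    "improved": 4,
    "much improved": 5,
}

def code_observations(notes: list[str]) -> list[int]:
    """Convert qualitative observation labels to numeric codes."""
    return [RATING_SCALE[note.lower()] for note in notes]

# Example: monthly observation labels for one participant's emotional composure
labels = ["no change", "improved", "improved", "much improved"]
print(code_observations(labels))  # [3, 4, 4, 5]
```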

Potential Findings

The alternative hypotheses represent the potential findings of the study. They correspond to each of the six objectives of the study and include the following:

  1. H1: The SMART treatment program alleviates pain, suffering, and damage that individuals with adverse childhood experiences (ACE) and their families have had
  2. H1: The SMART treatment program improves the mental and general health of individuals with adverse childhood experiences and their families
  3. H1: The SMART treatment program reduces divorce rates by 15% within the city of Santa Ana, California
  4. H1: The SMART treatment program aids the development of communication skills among individuals with adverse childhood experiences and their families
  5. H1: The SMART treatment program aids the development of problem-solving skills among individuals with adverse childhood experiences and their families
  6. H1: The SMART treatment program aids the development of emotional composure as a coping mechanism among individuals with adverse childhood experiences and their families

References

Ancker, J. S., Nosal, S., Hauser, D., Way, C., & Calman, N. (2017). Access policy and the digital divide in patient access to medical records. Health Policy and Technology, 6(1), 3-11.

Carlson, E. B., & Putnam, F. W. (1993). An update on the Dissociative Experiences Scale. Dissociation, 6(1), 16-27.

Graetz, I., Huang, J., Brand, R., Hsu, J., & Reed, M. E. (2019). Mobile-accessible personal health records increase the frequency and timeliness of PHR use for patients with diabetes. Journal of the American Medical Informatics Association, 26(1), 50-54.

Graetz, I., Huang, J., Brand, R., Hsu, J., Yamin, C. K., & Reed, M. E. (2018). Bridging the digital divide: Mobile access to personal health records among patients with diabetes. The American Journal of Managed Care, 24(1), 43-51.

Kuyken, W., Beshai, S., Dudley, R., Abel, A., Görg, N., Gower, P., … & Padesky, C. A. (2016). Assessing competence in collaborative case conceptualization: Development and preliminary psychometric properties of the Collaborative Case Conceptualization Rating Scale (CCC-RS). Behavioural and Cognitive Psychotherapy, 44(2), 179-192.

Manos, R. C., Kanter, J. W., & Luo, W. (2011). The Behavioral Activation for Depression Scale-Short Form: Development and validation. Behavior Therapy, 42, 726-739.

Marks, I. M., & Mathews, A. M. (1979). Brief standard self-rating for phobic patients. Behaviour Research and Therapy, 17(3), 263-267.

McCracken, L. M., Vowles, K. E., & Eccleston, C. (2004). Acceptance of chronic pain: Component analysis and a revised assessment method. Pain, 107, 159-166.

Rogers, J., & Révész, A. (2020). Experimental and quasi-experimental designs. The Routledge handbook of research methods in applied linguistics. New York: Routledge.

Salkovskis, P. M., Rimes, K. A., Warwick, H. M. C., & Clark, D. M. (2002). The Health Anxiety Inventory: Development and validation of scales for the measurement of health anxiety and hypochondriasis. Psychological Medicine, 32(5), 843-853.

Schalinski, I., Schauer, M., & Elbert, T. (2015). The Shutdown Dissociation Scale (Shut-D). European Journal of Psychotraumatology, 6.

Sharma, B. (2018). Processing of data and analysis. Biostatistics and Epidemiology International Journal, 1(1), 3-5.

Snoek, A., & Horstkötter, D. (2018). Ethical issues in research on substance‐dependent parents: The risk of implicit normative judgements by researchers. Bioethics, 32(9), 620-627.

Weiss, D. S., & Marmar, C. R. (1996). The Impact of Event Scale – Revised. In J. Wilson & T. M. Keane (Eds.), Assessing psychological trauma and PTSD (pp. 399-411). New York: Guilford.
