How we design feasibility studies.

Authors: Bowen DJ (1), Kreuter M (2), Spring B (3), Cofta-Woerpel L (4), Linnan L (5), Weiner D (6), Bakken S (7), Kaplan CP (8), Squiers L (9), Fabrizio C (10), Fernandez M (11)
Affiliations:
(1) School of Public Health, Boston University
(2) School of Public Health, Saint Louis University
(3) Department of Behavioral Medicine, Northwestern University
(4) National Cancer Institute’s Cancer Information Service and the Department of Behavioral Science, M.D. Anderson Cancer Center
(5) Department of Health Behavior and Health Education, School of Public Health, University of North Carolina at Chapel Hill
(6) Mashantucket Pequot Tribal Nation
(7) School of Nursing, Columbia University
(8) Department of Medicine, University of California San Francisco
(9) Cancer Information Service, National Cancer Institute
(10) National Cancer Institute’s Cancer Information Service, Yale Cancer Center
(11) Center for Health Promotion and Prevention Research, Health Science Center, University of Texas
Source: Am J Prev Med. 2009 May;36(5):452-7
DOI: 10.1016/j.amepre.2009.02.002
Publication date: May 2009
Availability: full text
Copyright: © 2009 Published by Elsevier Inc.
Language: English
Correspondence address: Deborah J. Bowen, PhD, Department of Social & Behavioral Sciences, Boston University, 801 Massachusetts Avenue, 3rd Floor, Boston MA 02118
Email: dbowen@bu.edu

Article abstract

Public health is moving toward the goal of implementing evidence-based interventions. To accomplish this, there is a need to select, adapt, and evaluate intervention studies. Such selection relies, in part, on making judgments about the feasibility of possible interventions and determining whether comprehensive and multilevel evaluations are justified. There exist few published standards and guides to aid these judgments. This article describes the diverse types of feasibility studies conducted in the field of cancer prevention, using a group of recently funded grants from the National Cancer Institute. The grants were submitted in response to a request for applications proposing research to identify feasible interventions for increasing the utilization of the Cancer Information Service among underserved populations.

Article content

Introduction

The field of health promotion and disease prevention is moving toward the goal of implementing evidence-based interventions that have been rigorously evaluated and found to be both efficacious and effective. This will encourage the evaluation of the efficacy of additional interventions, using standards of the sort applied in the evidence reviews conducted by the Cochrane Collaboration (www.cochrane.org) and the Task Force on Community Preventive Services (www.thecommunityguide.org).

By intervention is meant any program, service, policy, or product that is intended to ultimately influence or change people’s social, environmental, and organizational conditions as well as their choices, attitudes, beliefs, and behaviors. Both early conceptual models of health education1 and more modern versions of health promotion2 indicate that interventions should focus on changeable behaviors and objectives; be based on critical, empirical evidence linking behavior to health; be relevant to the target populations; and have the potential to meet the intervention’s goals. In cancer prevention and control, intervention efficacy has been defined as meeting the intended behavioral outcomes under ideal circumstances. In contrast, effectiveness studies can be viewed as evaluating success in real-world, non-ideal conditions.3

Clearly, because of resource constraints, not all interventions can be tested for both efficacy and effectiveness. Guidelines are needed to help evaluate and prioritize those interventions with the greatest likelihood of being efficacious. Feasibility studies are relied on to produce a set of findings that help determine whether an intervention should be recommended for efficacy testing. The published literature does not propose standards to guide the design and evaluation of feasibility studies. This gap in the literature and in common practice needs to be filled as the fields of evidence-based behavioral medicine and public health practice mature.

This article presents ideas for designing a feasibility study. Included are descriptions of feasibility studies from all phases of the original cancer-control continuum: from basic social science to determine the best variables to target, through methods development, to efficacy and effectiveness studies, to dissemination research. The term feasibility study is used more broadly than usual to encompass any sort of study that can help investigators prepare for full-scale research leading to intervention. It is hoped that this article can prove useful both to researchers when they consider their own intervention design and to reviewers of intervention-related grants.

Employing Feasibility Studies

Feasibility studies are used to determine whether an intervention is appropriate for further testing; in other words, they enable researchers to assess whether or not the ideas and findings can be shaped to be relevant and sustainable. Such research may identify not only what—if anything—in the research methods or protocols needs modification but also how changes might occur. For example, a feasibility study may be in order when researchers want to compare different research and recruitment strategies. Gustafson4 found that African-American women report more mistrust of medical establishments than do white women. A feasibility study might qualitatively examine women’s reactions to a specific intervention handout that attempted to promote the trustworthiness of a medical institution. If women’s reactions were positive and in line with increased trust in the institution, the feasibility study would have served as a precursor to testing the effects of that handout in recruiting women to a randomized prevention trial.5

Performing a feasibility study may be indicated when:

  • community partnerships need to be established, increased, or sustained;
  • there are few previously published studies or existing data using a specific intervention technique;
  • prior studies of a specific intervention technique in a specific population were not guided by in-depth research or knowledge of the population’s socio-cultural health beliefs; by members of diverse research teams; or by researchers familiar with the target population and in partnership with the targeted communities;
  • the population or intervention target has been shown empirically to need unique consideration of the topic, method, or outcome in other research; or
  • previous interventions that employed a similar method have not been successful, but improved versions may be successful; or previous interventions had positive outcomes but in different settings than the one of interest.


Appropriate Areas of Focus

It is proposed that there are eight general areas of focus addressed by feasibility studies. Each is described below and summarized in Table 1.

  • Acceptability. This relatively common focus looks at how the intended individual recipients—both targeted individuals and those involved in implementing programs—react to the intervention.
  • Demand. Demand for the intervention can be assessed by gathering data on estimated use or by actually documenting the use of selected intervention activities in a defined intervention population or setting.
  • Implementation. This research focus concerns the extent, likelihood, and manner in which an intervention can be fully implemented as planned and proposed,6 often in an uncontrolled design.
  • Practicality. This focus explores the extent to which an intervention can be delivered when resources, time, commitment, or some combination thereof are constrained in some way.
  • Adaptation. Adaptation focuses on changing program contents or procedures to be appropriate in a new situation. It is important to describe the actual modifications that are made to accommodate the context and requirements of a different format, media, or population.7
  • Integration. This focus assesses the level of system change needed to integrate a new program or process into an existing infrastructure or program.8 The documentation of change that occurs within the organizational setting or the social/physical environment as a direct result of integrating the new program can help to determine if the new venture is truly feasible.
  • Expansion. This focus examines the potential success of an already-successful intervention with a different population or in a different setting.
  • Limited-efficacy testing. Many feasibility studies are designed to test an intervention in a limited way. Such tests may be conducted in a convenience sample, with intermediate rather than final outcomes, with shorter follow-up periods, or with limited statistical power.


Table 1. Key areas of focus for feasibility studies and possible outcomes

Acceptability
The feasibility study asks: To what extent is a new idea, program, process, or measure judged as suitable, satisfying, or attractive to program deliverers? To program recipients?
Sample outcomes of interest:
  • Satisfaction
  • Intent to continue use
  • Perceived appropriateness
  • Fit within organizational culture
  • Perceived positive or negative effects on organization

Demand
The feasibility study asks: To what extent is a new idea, program, process, or measure likely to be used (i.e., how much demand is likely to exist)?
Sample outcomes of interest:
  • Actual use
  • Expressed interest or intention to use
  • Perceived demand

Implementation
The feasibility study asks: To what extent can a new idea, program, process, or measure be successfully delivered to intended participants in some defined, but not fully controlled, context?
Sample outcomes of interest:
  • Degree of execution
  • Success or failure of execution
  • Amount and type of resources needed to implement
  • Factors affecting implementation ease or difficulty
  • Efficiency, speed, or quality of implementation

Practicality
The feasibility study asks: To what extent can an idea, program, process, or measure be carried out with intended participants using existing means, resources, and circumstances and without outside intervention?
Sample outcomes of interest:
  • Positive/negative effects on target participants
  • Ability of participants to carry out intervention activities
  • Cost analysis

Adaptation
The feasibility study asks: To what extent does an existing idea, program, process, or measure perform when changes are made for a new format or with a different population?
Sample outcomes of interest:
  • Degree to which similar outcomes are obtained in the new format
  • Comparison of process outcomes between intervention use in two populations

Integration
The feasibility study asks: To what extent can a new idea, program, process, or measure be integrated within an existing system?
Sample outcomes of interest:
  • Perceived fit with infrastructure
  • Perceived sustainability
  • Costs to organization and policy bodies

Expansion
The feasibility study asks: To what extent can a previously tested program, process, approach, or system be expanded to provide a new program or service?
Sample outcomes of interest:
  • Fit with organizational goals and culture
  • Positive or negative effects on organization
  • Disruption due to expansion component

Limited efficacy
The feasibility study asks: Does the new idea, program, process, or measure show promise of being successful with the intended population, even in a highly controlled setting?
Sample outcomes of interest:
  • Intended effects of program or process on key intermediate variables
  • Effect-size estimation
  • Maintenance of changes from initial change

Relating to the Real World

Green and Glasgow9 have pointed out the incongruity between increasing demands for evidence-based practice and the fact that most evidence-based recommendations for behavioral interventions are derived from highly controlled efficacy trials. The highly controlled nature of efficacy research is a strength in that the designs used (often randomized trials) support causal inference. But this focus on internal validity can come at the cost of external validity: relevance and generalizability decrease, limiting dissemination. Practitioners call for more studies conducted in settings where, for example, community constraints are prioritized over optimal conditions—studies that specifically test the fit of interventions in real-world settings. Feasibility studies should be especially useful in helping to fill this important gap in the research literature, and new criteria and measures have been proposed (e.g., Reach, Efficacy/Effectiveness, Adoption, Implementation, Maintenance [RE-AIM]) to evaluate the relevant outcomes.10

To ensure that feasibility studies indeed reflect the realities of community and practice settings, it is essential that practitioners and community members be involved in meaningful ways in conceptualizing and designing feasibility research. Adhering to published principles of community-based participatory research11,12 should help in this regard, with the added benefit of helping to determine whether interventions are truly acceptable to their intended audience.

Design Options for Feasibility Studies

The choice of an optimal research design depends upon the selected area of focus. This premise holds equally for feasibility studies and for other kinds of research. As the knowledge base and needs for an intervention progress, different questions come to the fore. In the initial phase of developing an intervention, Can it work? is usually the main question. Given some evidence that a treatment might work, the next question is generally Does it work?, whether under ideal or under actual conditions, compared with other practices. Those are the questions addressed by efficacy and effectiveness studies. Finally, given evidence that an intervention is efficacious and effective, the question Will it work? is applied to the myriad contexts, settings, and cultures that might translate the intervention into practice. Table 2 outlines possible study designs according to the area of focus of the feasibility study.

Table 2. Sample study designs: phases of intervention development by area of focus

Intervention development phase:
  • Can it work? Is there some evidence that X might work?
  • Does it work? Is there some evidence that X might be efficacious under ideal or actual conditions, compared with whatever other practices might be done instead?
  • Will it work? Will it be effective in real-life contexts, settings, and cultures/populations that might adopt the intervention as practice?

Acceptability
  • Can it work? Focus groups with target-population participants to understand how this intervention would fit with daily-life activities
  • Does it work? An RCT to compare the satisfaction of the intervention group to that of a control group that did not receive the intervention
  • Will it work? A population-based survey before, during, and after implementation of a policy intervention

Demand
  • Can it work? Survey to determine whether people in the target population would use the intervention to guide their behavioral choices
  • Does it work? Pre–post design to compare the frequency and patterns of use across different populations
  • Will it work? Post-only design with multiple surveys over time to test reactions to the intervention in a new population

Implementation
  • Can it work? Pre–post design to evaluate whether the intervention can be deployed in any clinical or community context, using focus groups as the method of evaluation
  • Does it work? Pre–post design to evaluate a small-scale demonstration project testing whether the intervention can be deployed in any clinical or community context, using both surveys and observations to compare practices and outcomes before and after implementation
  • Will it work? Pre–post design to evaluate a small-scale demonstration project testing whether the intervention can be deployed in the target clinical or community context, using both surveys and observations

Practicality
  • Can it work? Small-scale demonstration study to examine the predicted cost, burden, and benefit of an appropriate intensity, frequency, and duration of the intervention, using key-informant interviews to gather data
  • Does it work? Cost-effectiveness analysis and interviews with community leaders or other stakeholders to determine how easily the intervention was used by their staff
  • Will it work? Cost analyses and matching interviews with providers to identify potential problem areas during implementation

Adaptation
  • Can it work? Quasi-experimental design using pre- and post-surveys to examine the effects of a previously adapted intervention in communities
  • Does it work? Small-scale experiment to examine whether an effective intervention continues to show evidence of efficacy once modified and implemented in a practice context
  • Will it work? Small-scale experiment testing the appropriate intensity, frequency, and duration of the modified intervention, or of the intervention for the new target population

Integration
  • Can it work? Pre–post design to observe the extent to which people in the target setting are using the new intervention activities, and with what costs and benefits to their other responsibilities
  • Does it work? Prospective longitudinal study to identify the sustainability of a recently tested package of intervention activities
  • Will it work? Annual monitoring of important systems to measure outcomes across years

Expansion
  • Can it work? Quasi-experimental, pre–post design using interviews with key informants to determine how well an expanded version of an intervention is perceived to work after implementation
  • Does it work? Uncontrolled pre–post study to test a new, enhanced version of a previously tested intervention
  • Will it work? Continued monitoring to identify any decay of intervention effects after implementation

Limited efficacy
  • Can it work? Case-control design examining retrospectively whether a better outcome is associated with being exposed versus not being exposed to the intervention
  • Does it work? Small-scale experiment examining whether the intervention can be delivered in any setting and yields trends in the predicted direction for a better outcome compared with usual practice
  • Will it work? Meta-analysis of reports of subgroup effects in published trials of the intervention (looking for a treatment-by-subgroup interaction; no evidence of interaction suggests no differential treatment effect)

Can It Work?

A variety of research designs can appropriately address the Can it work? question. Sometimes the idea for an intervention derives from observations of actual practice. A practice-derived treatment hypothesis can often be refined efficiently by conducting a case-control feasibility study. Such a study might examine retrospectively whether better outcomes are associated with being exposed versus not being exposed to a tobacco policy. Or the same question might be addressed prospectively via a cohort study. A cohort feasibility study would follow and compare the outcomes of individuals who did or did not hear about the policy. The advantage of the cohort design, compared to the case-control design, is that it establishes the timing and directionality of effects. The disadvantage is that the need for follow-up means that cohort studies take longer to complete. Compared to an RCT, the cohort study’s main disadvantage is that participants are not assigned randomly to treatment. Thus, their outcomes may differ not because of the intervention but because the participants or their circumstances were inherently different from the outset.
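
To make the retrospective logic concrete, here is a minimal sketch of the arithmetic behind such a case-control comparison: an odds ratio with an approximate confidence interval, computed from a 2×2 exposure-by-outcome table. The counts, the policy scenario, and the choice of the Woolf-method interval are illustrative assumptions, not details from the article.

```python
import math

# Hypothetical counts for a retrospective case-control feasibility study of a
# tobacco policy (all numbers invented for illustration). "Cases" have the
# poor outcome; "controls" have the good outcome. Exposure = having heard
# about / been covered by the policy.
exposed_cases, unexposed_cases = 18, 42        # among cases
exposed_controls, unexposed_controls = 35, 25  # among controls

# Odds ratio: odds of exposure among cases divided by odds among controls.
odds_ratio = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Approximate 95% CI using the Woolf (log-odds-ratio) standard error.
se_log_or = math.sqrt(1 / exposed_cases + 1 / unexposed_cases
                      + 1 / exposed_controls + 1 / unexposed_controls)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
# An OR well below 1.0 (here ~0.31) would suggest that policy exposure is
# associated with lower odds of the poor outcome -- a preliminary signal,
# not a causal claim, given the design's vulnerability to confounding.
```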

Practice-derived research hypotheses are sometimes described as originating trench to bench. The other major pipeline of intervention development proceeds bench to trench by deriving hypotheses about active intervention mechanisms from basic research. Often the study involves a laboratory context that mimics or is analogous to the treatment context. For example, messages may be seen on a computer screen rather than on the ultimately intended billboard. Stated intentions to seek cancer screening may be the outcome instead of the actual performance of screening behaviors.

The drawback of experimental feasibility studies is that they have relatively limited external validity. On balance, however, they have two great advantages. First, experiments permit random or unbiased assignment to intervention conditions. Therefore, some comparison to an unbiased control from the same population is available. Second, experiments afford a very time- and cost-effective means of testing whether an intervention could work. It is the authors’ opinion that the experiment is a vastly underutilized research design for feasibility studies. Small-scale experiments that more closely approximate the clinical or community context of an RCT can also be used to test other aspects of intervention feasibility. Questions about safety; optimal dose (treatment intensity, frequency, duration); and the sequencing of treatment all can be tested efficiently in experiments before the launching of a full-scale clinical trial.

Does It Work?

Eventually preliminary positive results can suggest that an intervention is ready to be tested in a full-scale trial whose results should influence health practice. At that juncture, a variety of new feasibility questions must be addressed. One concern is whether the outcome can be measured reliably and validly. Psychometric studies of test-instrument development and validation could be the kind of feasibility research needed to address that question, and in-depth qualitative assessments may be an asset to the development of such measures. A second question is whether the intervention can be clarified and conveyed in a disseminable format (e.g., a manual or brochure) that permits replication of the treatment.

A major feasibility issue that precedes the mounting of a full evaluation trial is the need to derive an effect-size estimate for the treatment. A small-scale randomized trial that mirrors the intended efficacy study may be valuable here. Such feasibility studies are sometimes called Phase-I or Phase-II clinical trials. Usually the design is an RCT because that study design affords the greatest internal validity (i.e., it maximizes confidence that changes in outcomes can be attributed causally to the treatment). Typically, the Phase-I or -II trial entails a smaller sample size than a full Phase-III efficacy/effectiveness trial. Earlier-phase trials are used, in part, to estimate effect size, power, and sample size for a full Phase-III trial.
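
As a sketch of how a pilot effect-size estimate can feed the sample-size calculation for the subsequent full-scale trial, the following applies the standard normal-approximation formula for a two-arm comparison of means. The pilot numbers are hypothetical, and the formula is one common choice rather than anything the article prescribes.

```python
import math
from statistics import NormalDist

def n_per_arm(d, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-arm comparison of means,
    via the usual normal-approximation formula:
        n = 2 * (z_{1-alpha/2} + z_{power})**2 / d**2
    where d is the standardized effect size (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_power) ** 2 / d ** 2)

# Suppose a small pilot RCT finds means of 6.1 vs. 5.2 on the intermediate
# outcome, with a pooled SD of 2.5 (all numbers hypothetical):
d_pilot = (6.1 - 5.2) / 2.5   # Cohen's d of about 0.36
print(n_per_arm(d_pilot))     # about 122 participants per arm for the full trial
```

Because effect sizes estimated from small pilots are noisy, investigators often power the full trial on a more conservative value than the pilot point estimate.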

Will It Work?

Ideally, a treatment will have been shown to be both efficacious and effective before being implemented broadly. New feasibility questions now arise, as interest shifts to disseminating and implementing the intervention broadly in diverse practice systems. It becomes critical to understand the perspectives of different stakeholders who will affect and be affected by the revised intervention. Those stakeholders form a system whose gears must mesh smoothly for the intervention to be taken up and integrated into practice. Qualitative research methods offer especially useful tools for understanding institutional and community cultures.13

Other kinds of feasibility questions that may be salient at the dissemination or implementation stage concern the potential extrapolation of the intervention beyond the populations and modalities in which it was studied originally. A frequent feasibility question is whether the treatment can be used for a new demographic subgroup—new in terms of ethnicity, culture, SES, or geography. That question often incorporates two sub-questions. One is whether the treatment will be found acceptable to the new population—a feasibility question best approached through qualitative research. The other sub-question asks whether the treatment retains its efficacy in the new population, in new settings, or with new health outcomes. Sometimes a completely distinct and unintended treatment or intervention emerges from such initial feasibility research and warrants additional study.

A final and commonly posed feasibility question is whether a new treatment-delivery channel or intervention method will work. For instance, relevant questions can concern whether the intervention is able to be delivered in group versus individual format, over the telephone instead of face-to-face, or in web- or PDA-based formats. There may be questions about whether paraprofessionals or peers or a computer can deliver the intervention as intended. Usually, these feasibility questions and others will be addressed initially through qualitative interviewing and surveys, followed by experimentation.

Conclusion

This article identifies the construct feasibility as a series of questions and methods. For an intervention to be worthy of testing for efficacy, it must address the relevant questions within feasibility. It is also important to discard or modify those interventions that do not seem to be feasible as a result of data collected during the feasibility-study phase. Using feasibility research in the intervention-research process as a determinant for accepting or discarding an intervention approach is a key way to advance only those interventions that are worth testing (i.e., have a high probability of efficacy).

Scientists who propose feasibility studies are encouraged to do so while keeping in mind the research questions outlined in this article. As with any research, an investigator should choose the area of focus that best matches the needs of the situation. Methodologies to address each area may vary and can be creatively combined to form a package appropriate to the setting, community, or population under study. Reviewers of grants, as well as investigators and grants officials, will also want to pay attention to the varied areas of focus that fall under the umbrella of feasibility. Smaller studies with mixed methods might yield more innovative feasibility results.
