Designing Your FBA


There is no single standardized, generally accepted format for an FBA. Instead, depending on their training program, preferred textbooks, local norms, accumulated resources, and practical experience, the specialists who conduct FBAs do so in a variety of ways. To our knowledge, no overarching set of procedures for non-experimental functional behavior assessment has been empirically demonstrated to result in superior decision-making. As a result, and as anyone who has worked in behavior support can tell you, FBAs can look very different from one practitioner to the next.

Given the disparate state of the field, we describe here a set of practices and procedures that seem to encompass generally accepted methods in FBA. We want to emphasize strongly that, as the specialist, you are, and must be, the professional who brings your own understanding of data-based decision-making and applied behavior analytic principles to the FBA process.

FBA is about using non-experimental methods to hypothesize the function of a student's behavior. How might we do that? First, we identify and prioritize target behaviors of concern. We then select and define the behavior (or class of behaviors) that we will target for this FBA. Next, we identify the information we wish to accumulate in order to understand the contingencies surrounding this behavior. When does it happen? When doesn't it happen? What usually happens before? What usually happens after? What is going on in the environment that could occasion and perpetuate this behavior?

We pose these assessment questions, identify and implement assessment procedures to answer these questions, and engage in continual data analysis to determine whether the questions have been answered. When a reliable pattern of behavior is identified, and we have confidence in the data that we have collected, we then formulate a functional hypothesis, which will drive subsequent decision-making in intervention design and behavior support.
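To make that pattern-finding concrete, here is a minimal sketch, in Python, of how structured ABC (antecedent-behavior-consequence) observation records might be tallied to surface recurring antecedents and consequences. The records, field names, and values are entirely hypothetical; your own recording forms and data systems will look different.

```python
from collections import Counter

# Hypothetical ABC records from direct observations; real data would come
# from your own recording forms.
abc_records = [
    {"antecedent": "task demand", "behavior": "elopement", "consequence": "task removed"},
    {"antecedent": "task demand", "behavior": "elopement", "consequence": "task removed"},
    {"antecedent": "peer attention", "behavior": "elopement", "consequence": "adult attention"},
    {"antecedent": "task demand", "behavior": "elopement", "consequence": "adult attention"},
]

# Tally how often each antecedent and consequence surrounds the target behavior.
antecedents = Counter(r["antecedent"] for r in abc_records)
consequences = Counter(r["consequence"] for r in abc_records)

print("Antecedents:", antecedents.most_common())
print("Consequences:", consequences.most_common())
# A consequence like "task removed" recurring after most episodes would be
# consistent with (though not proof of) an escape-maintained function.
```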

Data-Based Decision-Making

We would argue that the least effective conceptualization of an FBA is as a one-off, report-generating assessment: someone asks for an FBA for a student, the specialist comes in and does one, writes and presents the report, and then proceeds to the next case.

We would strongly encourage all behavior support professionals and specialists to conceptualize FBA as one part of an ongoing problem-solving framework, one that is circular in process and additive from one step to the next. This contrasts with a model of FBA that treats the process as piecemeal: problem identified, FBA report written, intervention plan developed and implemented.

This might sound like just dressing up the same process in different words, but we would argue that understanding FBA as one part of a larger data-based decision-making process should fundamentally inform the way in which we conduct and utilize procedures within that FBA.

What Should Always Be Included in an FBA?

We believe that a quality FBA should always (and we mean always) include (1) a record review, (2) interviews with at least two people who know the student and have seen the behavior occur on multiple occasions, and (3) multiple direct, in-person observations conducted by the behavior support professional who is in charge of the FBA.

Why do we advocate for these three assessment methods in every FBA? First, let's go back to the assessment question that FBAs are designed to answer: "why might this student be engaging in this specific target behavior?" FBA is a method anchored in the principles of applied behavior analysis, so in FBA we bring behavioral theory to that assessment question by conceptualizing it in terms of antecedents and consequences.

Applied behavior analysis holds that people engage in behavior because of histories of reinforcement and punishment that have followed that behavior, and because of associated antecedents that have come to signal whether reinforcement is available for it. There's more to it than that, of course, but to summarize somewhat simply: we come to any assessment question with at least one theoretical frame, and we then use that frame to determine what data we need to collect in order to answer the question.

Since, within applied behavior analysis, we consider observable behavior to be a function of that learning history, we need to gather data that will shed light on that history.

1. Record Review:

We use record reviews to better understand a student's learning history because record reviews can provide us with information about how people have responded to the student's behavior in the past. Does the student have a history of suspensions for certain behaviors? Are certain times of year typically more problematic than others, and if so, what usually occurs or doesn't occur during those times that might be interacting with the student's behavior? What interventions have been tried in the past, and what were their outcomes? Does this student have any prior FBAs or Behavior Intervention Plans (BIPs) in their file? If so, how were they conducted and what did they suggest?

2. Interviews:

We use interviews to get information from the people who see this behavior the most. Where and when does the behavior usually happen? When and where does it never happen? If you had to make the behavior happen right now, what would you do? If you needed to make sure the behavior never took place, what would you do?

We prefer to conduct our interviews as the first step in the FBA process, sometimes before record reviews, but always before we really start digging into our direct observations. Maybe we'll observe the student once or twice before an interview if the opportunity presents itself, but we want to use our interviews to help us make the most of our limited assessment time.

Interviews can help us understand when to observe the student, what's been tried in the past, what's currently going on in the classroom and other educational settings, and what we might want to do for intervention moving forward. Interviews are also critical because, if this is the first time we're meaningfully interacting with this student's teacher, paraprofessional, parent, or the student themselves, then we need to make this an experience that emphasizes collaboration and co-creation. The vast majority of the time, these are the people who will be implementing the eventual behavior plan. They need to feel included, valued, and respected if they're going to be asked to change their behavior; let's not forget that, ultimately, a behavior plan is about adult behavior change in service of student behavior change.

3. Direct Observation:

We insist that an FBA include direct observation by the person(s) conducting the FBA. The FBA process is rooted in applied behavior analysis, which is fundamentally concerned with observable behavior. We would argue that, if the term FBA is to mean anything real about a cohesive set of practices for working with kids, direct observation must be part of that process; indeed, it must be the central data-collection mechanism of that process.

When it comes to the number of observations that should be done, there isn't any clear cut-off for "enough." You should do the number of observations you need to generate a defensible functional hypothesis. Remember that you're making a claim, and you need to be able to defend that claim to others. How will you do so? When other members of the student's support team ask you to describe what you learned and why you think what you do, what will you tell them? If you only have one or two observations to support your hypothesis, then you should have an extremely strong rationale for why those observations can represent the overall function of the student's behavior (or, better yet, conduct more observations).

What Might We Also Include in an FBA?

One thing we haven't discussed is rating scales. There are two main types of rating scales that you might see in an FBA: (1) norm-referenced rating scales (often categorized as broadband or narrowband), and (2) function-based rating scales.

1. Norm-referenced rating scales:

Many people integrate norm-referenced rating scales like the BASC-3, SSIS, Conners 3, TRF, or CBCL into their FBAs. Norm-referenced rating scales like these answer a very specific assessment question: "how is this student performing when compared to other students of their age?"

This is something of a simplification, since norms can also incorporate things like gender identity or clinical status, and it's always important to remember that these ratings are retrospective and incorporate the rater's perspective into the process. However, at their most basic, the standard scores or T-scores that you get from a rating scale say one specific thing: how similar is this student to other students on this specific characteristic?

How would this inform an FBA? Well, these data could help determine whether a student's functioning is indeed distinct from the larger norming group. They could also help identify other sources of potential concern that may bear upon a student's behavior, like social skills or internalizing issues. In any case, however these data are used (if they're used at all), it's very important to keep in mind that these rating scales do not say anything about function.
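To illustrate the arithmetic behind that comparison, here is a minimal sketch of a T-score conversion (mean of 50, standard deviation of 10). The norm-group mean and standard deviation below are hypothetical; published scales derive scores from their own norm tables, so treat this as illustrative only.

```python
def t_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    """Convert a raw score to a T-score (mean 50, SD 10) relative to a norm group."""
    z = (raw - norm_mean) / norm_sd  # standardize against the norm group
    return 50 + 10 * z               # rescale to the T-score metric

# Hypothetical values: a raw score of 34 against a norm-group mean of 25 (SD 6).
print(t_score(34, 25, 6))  # 65.0 -> 1.5 SDs above the norming sample's mean
```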

2. Function-based rating scales:

Scales like the FAST or MAS typically work by presenting a list of questions about whether a student engages in the target behavior in specific contexts. The questions are composed of items aligned with different functions of student behavior. The answers are then tallied, and the most highly endorsed function is considered one potential function of the behavior.
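As a minimal sketch of that tallying logic (and not the actual items or scoring procedures of the FAST or MAS, which come from each instrument's manual), consider the following, where the item-to-function mapping and endorsements are entirely hypothetical:

```python
from collections import defaultdict

# Hypothetical item-to-function mapping and yes/no endorsements; real
# instruments have their own items, response scales, and scoring rules.
item_functions = {1: "attention", 2: "escape", 3: "tangible", 4: "sensory",
                  5: "attention", 6: "escape", 7: "attention", 8: "escape"}
endorsed = {1: True, 2: True, 3: False, 4: False,
            5: True, 6: True, 7: False, 8: True}

# Tally endorsements by the function each item is aligned with.
totals = defaultdict(int)
for item, function in item_functions.items():
    if endorsed[item]:
        totals[function] += 1

# The most highly endorsed function is flagged as ONE potential function,
# to be weighed against interviews and direct observation.
print(max(totals, key=totals.get), dict(totals))  # escape {'attention': 2, 'escape': 3}
```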

We would urge individuals who use such rating scales to consider three important points. First, these rating scales should never be used as a replacement for direct observation. The crux of ABA is that it pertains to observable behavior, and that behavior occurs because of its history of antecedents and consequences. We must therefore understand that history in order to understand the behavior. Asking a function-based rating scale to do that work for you is, in our professional judgment, inappropriate.

Second, we would encourage users of these rating scales to use them as tools for further conversation with stakeholders. Just like we use semi-structured interview protocols like the FAI or FACTS to guide our conversations, we could use these rating scales in a similar way.

Third, please be cognizant of how much weight is given to the results of these scales when they are used. It's very tempting to give substantial weight in decision-making to the numbers these scales produce; they're clean, easy to interpret, and seemingly robust. However, they are just one data point to consider alongside your interviews, observations, and other assessment methods.

Content on Designing Your FBA was compiled by PENT Content Consultant, Dr. Austin Johnson.