Investigation of Study Procedures to Estimate Sensitivity and Reliability of a Virtual Physical Assessment Developed for Workplace Concussions: Method-Comparison Feasibility Study

Original Paper, JMIR Neurotechnology (JMIR Neurotech; ISSN 2817-092X), JMIR Publications, Toronto, Canada. 2024;3(1):e57661. doi: 10.2196/57661

Keely Barnes, MHK (1,2,3); Heidi Sveistrup, PhD (1,2,4,5); Mark Bayley, MD (6,7); Mary Egan, PhD (1,2); Martin Bilodeau, PhD (1,2,4); Michel Rathbone, MD, PhD (8); Monica Taljaard, PhD (3,9); Motahareh Karimijashni, MSc (1,3); Shawn Marshall, MSc, MD (2,3,10)

1 School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, 200 Lees Ave., Ottawa, ON, Canada
2 Bruyère Research Institute, Ottawa, ON, Canada
3 Acute Care Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
4 School of Human Kinetics, Faculty of Health Sciences, University of Ottawa, Ottawa, ON, Canada
5 Systems and Computer Engineering Technology, Carleton University, Ottawa, ON, Canada
6 Kite Research Institute, Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
7 Division of Physical Medicine and Rehabilitation, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
8 Department of Medicine, Division of Neurology, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
9 School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
10 Department of Medicine, University of Ottawa, Ottawa, ON, Canada

Edited by Pieter Kubben; reviewed by Ali Panahi

Correspondence to: Keely Barnes, MHK, School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, 200 Lees Ave., Ottawa, ON, K1N 6N5, Canada; 1 613-612-6127; [email protected]

Submitted 20.06.2024; revised 18.09.2024; accepted 25.09.2024; published 27.11.2024.

© Keely Barnes, Heidi Sveistrup, Mark Bayley, Mary Egan, Martin Bilodeau, Michel Rathbone, Monica Taljaard, Motahareh Karimijashni, Shawn Marshall. Originally published in JMIR Neurotechnology (https://neuro.jmir.org), 27.11.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Neurotechnology, is properly cited. The complete bibliographic information, a link to the original publication on https://neuro.jmir.org, as well as this copyright and license information must be included.

Background

Remote approaches to workplace concussion assessment have demonstrated value to end users. The feasibility of administering physical concussion assessment measures in a remote context has been minimally explored, and there is limited information on important psychometric properties of physical assessment measures used in remote contexts.

Objective

The objectives of this feasibility study were to determine recruitment capability for a future larger-scale study aimed at determining sensitivity and reliability of the remote assessment, time required to complete study assessments, and acceptability of remote assessment to people with brain injuries and clinicians; document preliminary results of the sensitivity of the remote assessment when compared to the in-person assessment; and estimate the preliminary interrater and intrarater reliability of the remote assessments to inform procedures of a future larger-scale study that is adequately powered to reliably estimate these parameters of interest.

Methods

People living with acquired brain injury attended 2 assessments (1 in-person and 1 remote) in a randomized order. The measures administered in these assessments included the finger-to-nose test; balance testing; and the Vestibular/Ocular Motor Screening (VOMS) tool, including documentation of change in symptoms and distance for near point convergence, saccades, cervical spine range of motion, and evaluation of effort. Both assessments occurred at the Ottawa Hospital Rehabilitation Center. After the assessments, a second clinician, different from the person who completed the original assessments, independently viewed the recordings of the remote assessment and documented findings. The same second clinician viewed the recording again approximately 1 month following the initial observation.

Results

The rate of recruitment was 61% (20/33) of people approached, with a total of 20 patient-participants included in the feasibility study. A total of 3 clinicians participated as assessors. The length of time required to complete the in-person and remote assessment procedures averaged 9 and 13 minutes, respectively. The majority of clinicians and patient-participants agreed or strongly agreed that they were confident in the findings on both in-person and remote assessments. Feedback obtained revolved around technology (eg, screen size), lighting, and fatigue of participants in the second assessment. Preliminary estimates of sensitivity of the remote assessment ranged from poor (finger-to-nose testing: 0.0) to excellent (near point convergence: 1.0). Preliminary estimates of reliability of the remote assessment ranged from poor (balance testing, saccades, and range of motion: κ=0.38‐0.49) to excellent (VOMS change in symptoms: κ=1.0).

Conclusions

The results of this feasibility study indicate that our study procedures are feasible and acceptable to participants. Certain measures show promising psychometric properties (reliability and sensitivity); however, wide CIs due to the small sample size limit the ability to draw definitive conclusions. A planned follow-up study will expand on this work and include a sufficiently large sample to estimate these important properties with acceptable precision.

International Registered Report Identifier (IRRID)

RR2-10.2196/57663

Keywords: brain injury; virtual; assessment; remote; evaluation; concussion; adult; clinician review; in-person; comparison; sensitivity; reliability; acceptability survey; feasibility study; psychometric properties; vestibular/ocular motor screening; VOMS; workplace; clinician; hospital; rehabilitation center; brain; neurology; neuroscience; neurotechnology; technology; digital intervention; digital health; psychometrics; physical assessment; clinical assessment; workplace safety; mobile phone
Introduction

Workplace concussions impose a significant burden on the health care system, insurance providers, employers, and injured workers [1]. Conducting effective assessments after a workplace concussion is important for guiding intervention and facilitating recovery [2]. In this context, it is crucial to use measures that accurately capture deficits experienced by workers who are reporting persisting symptoms post concussion [3-5]. Ideally, an assessment should be comprehensive, consisting of measures that evaluate all domains, including physical (ie, cervical musculoskeletal and vestibulo-oculomotor function), cognitive, and socioemotional, that can be impaired by a concussion [6-8]. Furthermore, measures included in an assessment should be reliable, valid, sensitive, and possess clinical use characteristics (practical, timely, low cost, minimal equipment, etc) [3,9,10]. However, clinical concussion measures that excel in all these characteristics are limited.

Workplace concussions occur in both rural (1400 per 100,000 people [11]) and urban populations. Remote care offers an alternate approach to assessment that could be particularly beneficial for individuals living in rural areas who may experience challenges with attending in-person assessments, allowing them to connect with experts they may not have been able to access previously [12]. Remote approaches to concussion assessment have demonstrated value to end users (clinicians and patients); however, the accuracy of the findings from remote assessments has yet to be explored [13,14]. The lack of documented reliability and sensitivity properties associated with measures administered in remote assessments of people with concussions poses a barrier to the ongoing use and availability of these remote assessments in practice. Further barriers to completing remote assessments include concerns regarding safety, environmental setup, and the need for support at home [14-17]. While there are clear challenges to completing assessments, particularly physical assessments, at a distance, there are also many identified benefits [13,14].

A scoping review by O’Neil et al [18] reported a notable gap in the documentation of psychometric properties of measures administered in a remote context in people living with neurological conditions, including concussion. Agreement between in-person and remote assessments has been explored for musculoskeletal measures [19,20] and minimally explored in people living with neurological conditions [21]. Palacin-Marin et al [19] provide preliminary support for the use of remote means to administer certain measures related to low back pain such as evaluation of mobility and the visual analog scale for pain. Specifically, intraclass correlation coefficient (ICC) values for interrater and intrarater reliability ranged between 0.92 and 0.96 and the α reliability statistic was greater than .8 for the majority of measures when the remote approach was compared to the in-person approach [19]. Similarly, Cabana et al [20] examined the reliability of measures used for total knee arthroplasty. Measures, including knee range of motion, scar condition, knee joint swelling, lower limb muscle strength, timed up and go test, Tinetti gait test, and Berg balance test, evaluated using a videoconferencing platform, demonstrated good reliability (>0.80); however, levels of agreement ranged between −33% and 29% for measures of function and −20% and 8% for measures of knee range of motion [20]. Russell et al [21] compared in-person and remote physical measures for people living with Parkinson disease and reported that the physical assessment could be feasibly, reliably, and accurately completed using a telerehabilitation system. This previous work highlights a need and a value to further explore the psychometric properties of the clinical measures administered remotely for individuals with concussion.

With the COVID-19 pandemic, many clinicians have integrated remote physical assessments into their practice. Yet, there is limited research evaluating the sensitivity and reliability of the remote concussion assessment. Using a systematic approach, including focus groups [14] and working group and expert consensus [22,23], we developed a clinical assessment for individuals with a concussion. The assessment includes measures of balance, cervical spine mobility, coordination, and vestibular or ocular movements [14,22,23]. As an essential first step, prior to embarking on a full-scale study, we conducted a feasibility study. The primary feasibility objectives were to inform the methodology for a large-scale study. The feasibility measures include the rate of recruitment of both patient- and clinician-participants (willingness to participate to ensure the capability of recruiting enough participants for the large-scale study), feedback regarding the assessment approaches in the study, and preliminary information on 3 psychometric properties of the measures included in the remote assessment (interrater and intrarater reliability [24] and sensitivity [25] when administered remotely compared to in person). While this study was not designed to estimate the psychometric properties of the measures with sufficient precision, it provides some preliminary information on these metrics [26]. The study design and analysis considerations for this study were informed by Russell et al [27].

Methods

Ethical Considerations

Ethics approval was obtained from the Ottawa Health Sciences Network Research Ethics Board (20230311‐01H) in June 2023, followed by the Bruyère Research Ethics Board (M16-22-006) and the University of Ottawa Board of Ethics (H-06-23-9348) in June and July 2023, respectively. Patient-participants verbally consented over the telephone or provided informed consent in person. Privacy and confidentiality were maintained. Participants were provided with a parking voucher and CAD $30 (US $22) gift card following completion of participation in the study.

Participants and Recruitment

Patient-Participants

People living with acquired brain injuries (ABIs) or concussions were recruited from publicly funded ABI outpatient clinics or Workplace Safety and Insurance Board (WSIB) clinics based out of the Ottawa Hospital Rehabilitation Centre (TOHRC). Inpatients from the TOHRC ABI inpatient rehabilitation service were also recruited.

Eligible participants were adults aged 18 years or older who were attending a scheduled outpatient assessment or who were admitted to the ward and were under the care of one of our study clinicians. In addition to people living with concussion, people living with other forms of ABI (eg, moderate to severe traumatic brain injury and hypoxia [28]) were recruited for this study. Given that normal findings are frequently observed on the neurological examination in people with a concussion [29,30], the broader inclusion ensured that participants with identifiable positive findings on the neurological examination were represented in the sample. Participants unable to speak English or French or unable to complete both the in-person and remote assessment procedures were excluded.

The electronic medical records of patients attending in-person appointments and the list of patients admitted to the ward were screened to identify eligible individuals who were then approached via telephone (outpatients) or face-to-face (inpatients) to discuss possible participation in the study.

No sample size calculation was conducted for the feasibility study. Instead, we planned to recruit participants for a period of 5 months based on logistical considerations. We plan to include the data from the feasibility study in the future larger-scale study, assuming the protocol does not need to be modified between the feasibility and definitive study.

Clinician-Participants

Clinicians who were employed at TOHRC and were actively completing ABI assessments were recruited to participate. Assessors were responsible for completing the study assessments and observing and rating recordings of completed assessments. Eligible clinicians were approached via telephone and consented over the telephone.

Training

The Virtual Concussion Exam Manual [31] was adapted and reviewed by all clinicians prior to their participation in the study. The manual contains clear instructions on administering clinical measures remotely. Clinicians were encouraged to consult the adapted manual for guidance while conducting study assessments.

In-Person and Remote Assessments

Each patient-participant completed both the in-person and remote assessment at TOHRC on the same day. The assessments were conducted during a scheduled in-person appointment for outpatients, or a specific appointment set up for inpatients. The 2 assessments were separated by a brief rest period [32] with additional rest time provided until the participant could complete the remainder of the measures.

The order of the assessments (in-person and remote) was randomized and counter-balanced to ensure an equal number of participants completed the in-person and remote assessments first. Randomization occurred through REDCap (Research Electronic Data Capture; Vanderbilt University) using a random numbers table. The rationale was to equalize the influence of fatigue and learning effects on the subsequent assessment. The assessments consisted of the same clinical measures (Table 1).
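To illustrate what a counterbalanced 1:1 order allocation looks like, the short Python sketch below is purely illustrative (a hypothetical helper, not the REDCap random numbers table actually used in the study).

# Minimal, hypothetical sketch of a counterbalanced order allocation
# (illustration only; the study generated its allocation in REDCap).
import random

def counterbalanced_orders(n_participants=20, seed=1):
    """Return a shuffled list containing an equal number of each assessment order."""
    half = n_participants // 2
    orders = ["in-person first"] * half + ["remote first"] * half
    random.Random(seed).shuffle(orders)  # random order, exact 1:1 balance
    return orders

print(counterbalanced_orders(20))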

For the remote assessment, the patient-participant was in a separate room from the assessor. All patient-participants used the same computer and received technical support from a research team member if needed. Safety precautions during the remote assessment included (1) the presence of a research team member in the room throughout the remote assessments and (2) positioning patient-participants in front of a wall, chair, or bed during balance testing. All remote assessments used a Dell Vostro 3520 Laptop (with a 15.6-inch display) running the Microsoft Teams platform (for the patient-participants). Assessors used desktop computers. The remote assessments were audio-video recorded for later review and evaluation by a different clinician. For the in-person assessment, a research team member was present in the room to record the assessment using the same laptop as the remote assessment. Safety precautions were provided by the assessing clinician.

Table 1. Outline of measures included in the remote assessment, clinical decision for each measure, and guideline for clinical decision.

Measure | Clinical decision | Guideline for decision | Reference
Finger-to-nose test | Normal or abnormal | Abnormality considered hesitation, tremor, and undershooting or overshooting | [33]
Balance testing (feet together, single leg stance, and tandem stance) for 20 seconds with eyes open and closed | Normal or abnormal | Abnormality considered inability to hold the position for 20 seconds | [31]
Saccades | Normal or abnormal | Abnormality determined by saccade speed, accuracy, initiation, intrusions or oscillations, and range of motion and conjugacy | [34]
VOMSa—change in symptoms | Changes in symptoms documented following completion of each component | Change greater than or equal to 2 points out of 10 was considered abnormal | [35]
VOMS—near point convergence | Distance documented | A distance greater than or equal to 5 cm was considered abnormal | [35]
Cervical spine range of motion | Estimated angles for cervical flexion, extension, lateral flexion, and rotation recorded | Values were compared to pooled norms for healthy individuals aged 20 to 59 years old documented in a recent systematic review to identify abnormality: flexion=50‐72°, extension=58‐77°, lateral flexion=37‐47°, and rotation=67‐81° | [36]

aVOMS: Vestibular/Ocular Motor Screening.

Observation of Recordings of Assessments

Two assessors (rater A and rater B) documented clinical findings for each patient-participant. Rater A completed the in-person and synchronous remote assessments. Rater B documented findings asynchronously from audio-video recordings of the remote assessment at 2 time points separated by approximately 1 month. Figure 1 outlines the study assessment procedures.

Figure 1. Assessment procedures modified from Russell et al [27].

Feedback

Feedback was obtained from both clinician- and patient-participants using a feedback form, following completion of the assessments. Participants answered questions related to the environmental setup of the remote assessment, confidence in the assessment findings, perceived similarity between the 2 approaches, and provided any additional feedback on the form.

Analysis

Rate of Recruitment and Time Required to Complete Study Procedures

Descriptive data are provided for the average number of participants recruited per month, the rate of recruitment, and the average and range of time required for the in-person and remote assessments separately.

Participant Characteristics, Feedback, and Confidence Ratings

Participant characteristics (sex, age, injury information, etc) were analyzed descriptively. Feedback obtained following the completion of study assessments and perceived confidence ratings in findings on study assessments for both clinician- and patient-participants were summarized narratively.

Reliability and Sensitivity

Quantitative data analyses were completed using IBM SPSS (version 28, IBM Corp). Interrater reliability was determined by comparing the results documented by rater A (remote assessment) with those documented by rater B (remote assessment at time 1). Intrarater reliability was determined by comparing the results from rater B at time 1 versus time 2. All measures were coded into binary categories (abnormal vs normal; see Table 1). Some measures had single items (Vestibular/Ocular Motor Screening [VOMS] change in symptoms, VOMS near point convergence, and saccades), whereas some measures (balance testing, cervical spine range of motion, and finger-to-nose testing) included multiple items where each was rated as 0 or 1 and results were summed. For both interrater and intrarater reliability, unweighted κ statistics were calculated. κ is calculated by dividing the difference between the observed agreement (proportion of times raters agreed) and expected agreement by chance (agreement that would occur by random chance) by 1 minus the expected agreement by chance. κ values between 0 and 1.0 were documented where values closer to 0 indicate poor reliability and values closer to 1.0 indicate perfect agreement [37]. The 95% CIs were calculated manually by multiplying the standard errors obtained from SPSS by the z score statistic 1.96 and adding and subtracting that value to or from the κ values to obtain the upper and lower bounds [38].
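For reference, the calculation described above can be written compactly, where $p_o$ is the observed agreement, $p_e$ is the expected agreement by chance, and $SE_{\kappa}$ is the standard error obtained from SPSS:

$\kappa = \dfrac{p_o - p_e}{1 - p_e}, \qquad 95\%\ \mathrm{CI} = \kappa \pm 1.96 \times SE_{\kappa}$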

The sensitivity of the remote assessment was determined by comparing the results documented by the clinician who observed and rated the recording of the remote assessment (rater B at time 1) with the results documented by the clinician conducting the in-person assessment (rater A, reference standard) [27]. The value reflects the proportion of people identified as impaired on the in-person measure who were identified as impaired on the remote measure [25]. Sensitivity is calculated by dividing the true positives (identified correctly as impaired on the in-person and remote assessments) by the sum of the true positives and false negatives (the remote assessment failed to identify as impaired, but the in-person assessment correctly identified as impaired). The 95% CIs were calculated using the online VassarStats Clinical Calculator [39].
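In equation form, with TP denoting true positives and FN denoting false negatives as defined above:

$\mathrm{Sensitivity} = \dfrac{TP}{TP + FN}$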

Adverse Events

Adverse events, including the severity and type, were monitored. Significant worsening of symptoms requiring medical attention, such as an emergency department visit or a new injury, was considered an adverse event [40].

Results

Participant Characteristics

A total of 20 patient-participants completed both the in-person and remote assessments (see Table 2). Of these, 15 (75%) were female. Most participants were working at the time of their assessment. Most participants reported limitations in functional abilities and perceived their mental health as fair to poor. When considering the criteria outlined regarding the identification of abnormality on each measure, 18 of the 20 participants had abnormal findings on at least 1 of the measures in the in-person assessment.

The injury characteristics of the 20 patient-participants are presented in Table 3. A total of 14 (70%) participants sustained a concussion, and the remainder sustained a moderate to severe traumatic brain injury or other form of ABI, with most injuries occurring outside of the workplace context.

Most participants used technology daily and the majority rarely needed assistance when using technology (Table 4).

Three clinicians participated as assessors. Two assessors (male and female), acting as rater A, were a physiatrist and a physician assistant. The third clinician (a male), acting as rater B, was a physiatrist and observed the recordings of the assessments. These clinicians typically assess more than 50 patients with ABI annually and reported feeling confident in their ability to complete the in-person and remote assessments.

Table 2. Demographic characteristics (N=20).

Characteristics | Values
Age (years), range | 21‐58
Sex, n (%)
  Female | 15 (75)
  Male | 5 (25)
Gender, n (%)
  Woman | 15 (75)
  Man | 5 (25)
Ethnicity, n (%)
  White | 15 (75)
  Black | 1 (5)
  Arab | 1 (5)
  Southeast Asian (eg, Vietnamese, Cambodian, Malaysian, and Laotian) | 1 (5)
  West Asian (eg, Iranian and Afghan) | 1 (5)
  First Nation or Indigenous | 1 (5)
Highest educational attainment, n (%)
  Secondary (high) school diploma or equivalent | 8 (40)
  Postsecondary certificate, diploma, or degree | 12 (60)
Current work status, n (%)
  Off work | 7 (35)
  Modified return to work, same preinjury occupation | 6 (30)
  Full return to work, same preinjury occupation | 4 (20)
  Full return to work, different occupation | 2 (10)
  First time working | 1 (5)
Functional limitations, n (%)
  Moderate activities
    Yes, limited a lot | 3 (15)
    Yes, limited a little | 10 (50)
    No, not limited at all | 7 (35)
  Climbing stairs
    Yes, limited a lot | 3 (15)
    Yes, limited a little | 8 (40)
    No, not limited at all | 9 (45)
Perceived mental health, n (%)
  Excellent or very good | 2 (10)
  Good | 6 (30)
  Fair or poor | 12 (60)

Table 3. Injury information (N=20).

Characteristics | Values
Diagnosis, n (%)
  Other acquired brain injury | 6 (30)
  Mild traumatic brain injury or concussion | 14 (70)
Mechanism of injury, n (%)
  Work | 6 (30)
  Motor vehicle accident | 4 (20)
  Assault | 2 (10)
  Fall or hit head at home | 4 (20)
  Sport | 2 (10)
  Poisoning | 1 (5)
  Encephalitis | 1 (5)
Date of injury, n (%)
  <6 months ago | 3 (15)
  6 months to <1 year ago | 3 (15)
  1 to <2 years ago | 7 (35)
  2 to <3 years ago | 3 (15)
  >3 years ago | 4 (20)

Table 4. Remote assessment and technology experience (N=20).

Characteristics | Values
Previously attended remote assessment, n (%)
  Yes | 11 (55)
  No | 9 (45)
If yes, number attended, n (%)
  <5 | 8 (40)
  5‐10 | 1 (5)
  >10 | 1 (5)
  Unsure | 1 (5)
Distance located from the rehabilitation center, n (%)
  <30 minutes | 9 (45)
  30‐60 minutes | 8 (40)
  >60 minutes | 2 (10)
  N/Aa—no home | 1 (5)
Technology available for remote assessment, n (%)
  Computer | 2 (10)
  Laptop | 8 (40)
  Smartphone | 2 (10)
  Multiple devices (iPad, smartphone, and computer) | 7 (35)
  None | 1 (5)
Use of technology, n (%)
  Weekly | 3 (15)
  Daily | 17 (85)
Assistance needed during use of technology, n (%)
  Never | 7 (35)
  Rarely | 9 (45)
  Monthly | 2 (10)
  Weekly | 1 (5)
  Daily | 1 (5)

aN/A: not applicable.

Rate of Recruitment

We experienced challenges recruiting clinicians willing to conduct the study assessments, and clinicians’ scheduling conflicts posed difficulties with patient-participant recruitment; therefore, the involvement of additional professionals, such as physiotherapists, will be needed to support the large-scale study. We recruited, on average, 1 patient-participant per week at TOHRC. A total of 38 potential participants were identified. Of these, 7 could not be reached. Of the potential participants reached, 6 (23%) declined, primarily due to concerns that the multiple assessments would make their symptoms worse. Our rate of recruitment was, therefore, 20/33 (61%) of people approached and 20/26 (77%) of people reached. Given the recruitment rate at 1 center, we anticipate being able to approach approximately 6 participants per month. With the anticipated recruitment rate of 61% over 5 months, we can feasibly recruit 20 participants, which would mean the future large study with a target sample size of 60 [41] would require 15 months to complete recruitment.
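As a rough illustration of this projection (assuming the observed pace of 20 enrolled participants per 5 months is maintained at a single center):

$\dfrac{60\ \mathrm{participants}}{20\ \mathrm{participants}/5\ \mathrm{months}} = 15\ \mathrm{months}$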

Length of Time Required to Complete Study Assessments

The time required to complete the in-person assessment ranged from 5 to 13 minutes and averaged 9 minutes. The time required to complete the remote assessment ranged from 7 to 26 minutes and averaged 13 minutes.

Feedback

Clinician-Participants

Perceived Similarity of Remote and In-Person Assessments

The 2 assessors (acting as rater A) believed that they obtained similar information from both the in-person and remote assessments in the majority of cases (16/20, 80%). On 4 occasions, the assessors reported that the patient-participant had fatigue and experienced heightened symptoms during the second assessment; however, on 2 of these 4 occasions, the assessor still believed that comparable findings were obtained even with the exacerbation. For example, 1 clinician reported that the findings were "comparable, but patient fatigued and became more symptomatic," and another reported "an increase in symptoms during the second assessment." On 2 occasions, the assessor reported that they did not believe similar information was obtained, as one of the patient-participants felt more comfortable in person and the other patient-participant was able to follow directions better in person, reporting that the participant "had an easier time with directions in-person."

Challenges

One assessor expressed some issues with remote tests due to dark lighting in the assessment rooms when blinds were open and patient-participants were positioned in front of windows, reporting "lighting in background, lost sight of pen." Another assessor noted audio issues ("sound delay") during the remote assessment with sound going in and out, forcing the assessor and patient-participant to repeat statements. Technical challenges related to internet connectivity were experienced on 2 occasions, with reports of "internet freezing in exam room" and "initial connection slow."

Patient-Participants

Perceived Similarity of Remote and In-Person Assessment Findings

Most (12/20, 60%) of the patient-participants believed that similar findings were obtained in both the in-person and remote assessments. One patient-participant was unsure about the similarity of findings, reporting “measurement wise, I can’t say.” Three patient-participants mentioned that ocular tests may have been easier for the clinician to observe in person. For example, 1 participant reported that “there might have been a difference in visual tests” and another reported that “I think eye movements are more easy to observe in-person.” One patient-participant reported that more relevant clinical data were obtained in person, 1 reported that their concussion symptoms were worse remotely due to screen exposure, 1 expressed doubt that the remote assessment would have been sufficient for their initial assessment, and 1 perceived that similar information was not obtained as the conversation was easier in person.

Challenges

Patient-participants expressed minimal concerns regarding the environmental setup of the remote assessment. One patient-participant reported that it would be helpful to see the whole body of the clinician for demonstrations of measures or to observe a photo of the measure on the screen beforehand. Another noted that while the setup was adequate, space was limited for one of the measures that required her to rotate her body. Technical support was required for all participants to manage camera angles during remote completion of balance testing. One participant highlighted the value of having someone physically present to troubleshoot any camera angle challenges. Two participants suggested that a larger screen size would be beneficial, with 1 reporting that "the screen was a bit small and needed to be adjusted to capture all my movements" and another reporting that "a bit larger screen might be better for evaluation to see eye movements." Finally, 4 patient-participants reported that the lighting was an issue for both the in-person and remote assessment due to light sensitivity associated with the concussion, whereas assessors commented on lighting in relation to the visibility of the patient-participant during the remote assessment. For example, 1 patient-participant reported that "lighting would be great on a dimmer" and another reported that the assessments were "a bit bright with the lights on."

Confidence Ratings

The assessors expressed high confidence in their findings on the in-person (20/20, 100%) and remote (19/20, 95%) assessments. Only on 1 occasion was an assessor “neutral” in terms of their confidence in their findings on the remote assessment.

Most patient-participants agreed or strongly agreed that they felt confident in their assessors’ findings on the in-person (19/20, 95%) and remote (15/20, 75%) assessments. Two patient-participants did not feel confident and 2 were “neutral” in their confidence levels on the remote assessment. One participant did not feel confident in both the in-person and remote assessments.

Preliminary Information on Sensitivity of the Remote Assessment Compared to the In-Person Assessment

The preliminary sensitivity of the remote compared to in-person administration of the measures ranged from 0.0 to 1.0 (Table 5). This suggests poor (finger-to-nose testing) to excellent (near point convergence) ability to detect deficits in the remote assessment when deficits are truly present (based on the reference standard, which is the in-person assessment). On 2 occasions, the evaluation of saccades was mistakenly not documented by the assessor. On 1 occasion, the VOMS was not completed due to the patient’s inability to complete the measure because of aggravation of symptoms. No abnormality was documented for effort on both the in-person and remote assessments and, therefore, sensitivity could not be computed.

Table 5. Sensitivity of the remote assessment compared to the in-person assessment.

Measure | Values, n | Sensitivity (95% CI)
Cervical spine
  ROMa | 20 | 0.33 (0.09-0.69)
Vestibular
  Balance—eyes open and closed: feet together, single leg stance, and tandem stance | 20 | 0.94 (0.69-1.0)
  VOMSb—change in symptoms | 19 | 0.92 (0.60-0.99)
  VOMS-NPCc | 19 | 1.0 (0.70-1.0)
Neurological examination
  Finger-to-nose | 20 | 0.0 (0.0-0.60)
Oculomotor
  Saccades | 18 | 0.50 (0.09-0.91)
Effort
  Optimal effort | 20 | —d

aROM: range of motion.

bVOMS: Vestibular/Ocular Motor Screening.

cNPC: near point convergence.

dStatistic could not be computed, as the values documented by both assessors are constant for this measure (all normal findings reported).

Preliminary Information on Interrater and Intrarater Reliability

Table 6 presents the preliminary information on the interrater and intrarater reliability of the measures when administered remotely. Cohen κ values for interrater reliability ranged from poor for balance testing (0.38), range of motion (0.47), and saccades (0.49) to excellent (1.0) for VOMS change in symptoms. The intrarater reliability ranged from poor (0.44) for the evaluation of saccades to very good (0.89) for VOMS change in symptoms.

Table 6. Interrater and intrarater reliability of the measures when administered remotely.

Measure | Intrarater reliability: value, n | Intrarater reliability: Cohen κ (95% CI) | Interrater reliability: value, n | Interrater reliability: Cohen κ (95% CI)
Cervical spine
  ROMa | 20 | 0.69 (0.29 to 1.0) | 20 | 0.47 (0.04 to 0.90)
Vestibular
  Balance testing—eyes open and closed: feet together, single leg stance, and tandem stance | 20 | 0.61 (0.11 to 1.0) | 20 | 0.38 (−0.09 to 0.86)
  VOMSb—change in symptoms | 20 | 0.89 (0.67 to 1.0) | 20 | 1.0 (1.0 to 1.0)
  VOMS-NPCc | 20 | 0.76 (0.47 to 1.0) | 20 | 0.67 (0.34 to 0.99)
Neurological examination
  Finger-to-nose | 20 | —d | 20 | —
Oculomotor
  Saccades | 20 | 0.44 (−0.2 to 1.0) | 18 | 0.49 (0.04 to 0.94)
Effort
  Optimal effort | 20 | — | 20 | —

aROM: range of motion.

bVOMS: Vestibular/Ocular Motor Screening.

cNPC: near point convergence.

dStatistic could not be computed, as the values documented by the second assessor are constant for these measures (all normal findings reported).

Adverse Events

Most (17/20, 85%) participants reported an increase in preexisting brain injury symptoms, such as increased headaches, dizziness, and nausea, during the VOMS measure; however, this was true for both the in-person and remote assessments.

Discussion

Principal Findings

This is one of the first studies to examine preliminary psychometric properties of a concussion assessment administered using remote approaches. According to Montes et al [42], a systematic approach to remote assessment development is essential, with established in-person measures serving as a basis for comparison or as a reference standard. Taking this into consideration, we first carried out reviews of the psychometric properties of potential measures to help select the measures to be included in our remote assessment [14,22,23]. In this study, we report on the acceptance, feasibility, and preliminary psychometric properties of these assessments to provide initial evidence for the remote assessment and to inform a larger study of remote assessment for concussions.

The rate of recruitment is a critical indicator of the success of a future large-scale study. Our moderate rate of recruitment indicates that there is sufficient interest among people with ABI to participate in a remote assessment-based study; however, additional strategies may be needed in order to increase our ability to reach potential participants. Although patient-participants showed a moderate willingness to participate, we did experience more difficulties recruiting males and recruiting people with other forms of ABI when compared to the recruitment of people with concussions. This was mainly because the practice of one of our assessors only included people with concussions (and not other forms of ABI). We experienced some difficulty recruiting people with known abnormality on specific tests, particularly those with abnormal coordination, as people with concussion typically have normal coordination and recruitment of people with concussion was easier due to the nature of the practice of our recruited assessors. The rate of patient recruitment appeared to be influenced by clinical status (concerns regarding aggravation of symptoms due to the nature of the measures and the need to complete the measures twice). Hunt et al [43] reported that stakeholder engagement in concussion research may be inhibited by injury-related factors, personal deterrents (vulnerability and fear), and environmental barriers. Concussion symptoms, including physical, cognitive, and emotional, were identified as a barrier to involvement by patient-participants [43].

The length of time required to complete our study procedures is another essential aspect of evaluating the feasibility of completing a large-scale study and participant burden. Factors influencing the time required to complete the study assessments included the participant’s ability to follow instructions, participant symptom aggravation, and technology issues (internet speed). The average time required to complete our remote assessment was longer than the in-person assessment, which is contrary to findings reported by Tran et al [44], who noted that remote visits tend to be similar in length when compared to in-person visits. However, adaptation of certain measures to remote environments, such as the VOMS, has been previously reported to take time and practice [45], which may have contributed to the additional time required to complete the remote assessment. While the duration of the remote assessment was longer than that of the in-person one, eliminating the travel required to attend an in-person visit highlights the convenience associated with remote assessments from the patient perspective [46,47], although the increased clinician time required may be a concern [14].

Additionally, having technical support with camera angles may have positively impacted the experience of the participants with the remote assessment in this feasibility study. A research team member was present to aid with moving the laptop to improve the visibility of the patient for the clinician, close blinds, and troubleshoot internet issues. Ownsworth et al [48] reported that ongoing access to support could improve user-friendliness and facilitate the use of remote care. Further, there is a need for reliable and high-quality videoconferencing technology, which could present a challenge in practice due to variable accessibility and cost [49]. To address the lighting issues raised by participants, it is recommended that blinds be closed when completing remote assessments and that lighting be bright enough for clinicians to observe the individual on screen while remaining manageable for the individual in terms of limiting symptom aggravation. To improve visibility, a blank backdrop is recommended along with appropriate positioning of the camera at eye level [50].

Participants perceived the environmental setup of the remote assessment to be adequate. Consistent with the literature, the feedback obtained highlighted advantages of remote assessments over traditional in-person assessments, including eliminating the need to drive to the assessment center and the ability to better control the environment at home, such as having the capacity to dim lights [46,51,52]. However, home can present challenges as well [53]. The remote assessments for this study were all conducted using the same laptop and within the same setting, which may have positively impacted the experiences of the participants. The home environment may present unique challenges, such as distractions and variable screen sizes, which in turn may impact outcomes on the assessment [54]. It is recommended to develop a plan and schedule for these assessments to minimize such distractions in the home setting [54]. Further, 1 participant in our study was homeless and was admitted as an inpatient and, therefore, would not have had access to the needed technology in a home environment.

When remote assessments are implemented in practice, it is important to ensure that findings obtained through the remote assessment are comparable to those obtained through in-person approaches [27]. For the most part, the clinician- and patient-participants in this feasibility study perceived that the findings were similar in the in-person and remote assessments. This perception of congruence is supported by objective data obtained by Vargas et al [55], who examined the feasibility of remote assessment of concussion using a telemedicine robot in which a neurologist remotely assessed injured athletes simultaneously with sideline providers’ in-person assessments. Vargas et al [55] provide preliminary information on the strong level of agreement (within 3 seconds or points 100% of the time) between findings documented by the in-person sideline provider and the remote neurologist on specific concussion measures (Standardized Assessment of Concussion [a cognitive test], modified Balance Error Scoring System [a balance test], and King-Devick test [a saccadic eye movement test]) [55]. While the perception of congruence was high, participants, both assessors and patients, were often more confident in the findings of the in-person assessment when compared to the remote assessment. These findings are in line with Gilbert et al [56], who noted that clinicians and patients were satisfied with remote consultations; however, in-person consultations are still preferred (outside the COVID-19 pandemic).

In-person measures should have acceptable reliability properties for method-comparison studies to have meaningful results [27]. It is, therefore, recommended to determine the reliability properties of the remote assessment as part of the method-comparison study. Sensitivity metrics of the measures included in the remote assessment are also of interest, as clinical assessment findings are relied upon to detect deficits and guide intervention [2]. The preliminary reliability and sensitivity estimates in this study appear to differ from previously reported in-person values. Reliability values for in-person administration of the measures range from moderate (test-retest reliability of the single leg stance, with a κ of 0.43 [57]) to excellent (within-tester reliability of cervical spine range of motion, with an ICC of 0.90 [58]). The sensitivity metrics associated with in-person administration of the measures range from moderate (0.45 for balance testing in people with traumatic brain injury [59]) to excellent (96% for the VOMS assessed in people with concussion [58]). The findings of this feasibility study indicate that the evaluation of finger-to-nose testing, saccades, and cervical spine range of motion appear to have poorer properties associated with remote administration when compared to in-person metrics, which have documented sensitivities of 71% [59], 64%‐77% [60], and 86%‐95% [61], respectively.

Consistent with the literature, the lack of physical presence in the remote assessment may have been associated with an inability to gain a full understanding of the patient’s status on these measures, and subtle abnormalities, such as those observed during oculomotor assessments, may have been more difficult to capture through videoconferencing [62,63], which was also subjectively acknowledged by patient-participants in this feasibility study. Documentation of psychometric properties of measures administered in both in-person and remote contexts is required to support the hybrid approach to care, which integrates both in-person and remote interactions with clinicians. This approach is desired by both clinicians and people living with concussions [13,14].

The data obtained in this study suggest that there are specific measures, such as the evaluation of saccades, cervical spine range of motion, and finger-to-nose testing that are more difficult to administer remotely and, therefore, a high level of care in delivery should be considered to increase reliability and sensitivity. Further development and research are, therefore, needed in this area to determine how best to administer these measures remotely. This could include using advanced technologies, improving training of clinicians, or improving administration instructions [50], all of which will be considered for the large-scale study. Best practices for administering these measures need to be identified, and strategies to overcome limitations posed by the remote environment should be explored.

Strengths and Limitations

A strength of this study includes the use of technology and software that are commonly used to complete remote assessments at TOHRC. This decision was made to mimic as best as possible the current clinical approach to remote assessments. Further, we expanded the inclusion criteria to include participants with various forms of ABI, which ensured the inclusion of individuals with identifiable positive findings. This approach not only facilitated recruitment but also enhanced the generalizability of our findings.

Our study also has several limitations. First, this study was carried out in a controlled environment to standardize as many aspects of the remote assessment as possible. Thus, we did not test the remote assessment in real-world settings, such as with people located in remote or rural regions, where factors such as home environments, technology used, internet, and so forth may have influenced reliability. Second, due to an inability to complete the assessments more than twice, we were unable to establish the reliability properties of the in-person assessment and relied on the literature for the in-person properties needed for comparisons, which will also be required for the large-scale study. Third, it was not feasible for 2 different clinicians to conduct the initial in-person and remote assessments in our clinical environment due to scheduling constraints, so the same clinician completed both initial study assessments. In an attempt to mitigate potential bias and in line with best practices for method-comparison studies comparing in-person and remote approaches [27], we randomized the order of the study assessments and compared the in-person assessment conducted by rater A with the findings of the observed recording completed by rater B; however, this approach is limited in that rater differences may have impacted the findings. Finally, the CIs reported in this feasibility study are extremely wide (due to the small sample size) and, therefore, few conclusions can be drawn from the psychometric property estimates. The small sample size further limits the generalizability of the study findings. More data and extensive study are needed to definitively establish the reliability and sensitivity properties associated with the remote concussion assessment.

Conclusions

For clinical measures to be confidently used by clinicians in remote care practice, comparisons need to be made to their in-person counterparts. Given the width of the CIs, little can be concluded regarding the sensitivity of the concussion measures administered remotely compared to in-person administration, or regarding the reliability of those measures. However, this feasibility study documents the time needed to complete components of a concussion assessment remotely and confirms the probable safety of the assessment, with no adverse events specific to the remote assessment documented. The findings of this feasibility study provide a foundation and will inform processes for a planned follow-up study that contains an adequate sample size to estimate the psychometric properties with adequate precision. Future work should expand on this foundation through the exploration of the impacts of home environments on remote concussion assessment outcomes and through the investigation of additional relevant psychometric properties, such as responsiveness.

Acknowledgments

KB is supported by an admission scholarship from the University of Ottawa, the Ontario Graduate Scholarship, and the Early Professionals, Inspired Careers in AgeTech (EPIC-AT) Health Research Training Program fellowship through AGE-WELL. MK is supported by an admission scholarship, international doctoral scholarship, Special Merit Scholarship from the University of Ottawa, Hans K Uhthoff, MD FRCSC Graduate fellowship, Bank of Montreal Financial Group Graduate scholarship, and Ontario Physiotherapy Association Scholarship.

Conflicts of Interest

None declared.

Abbreviations

ABI

acquired brain injury

ICC

intraclass correlation coefficient

REDCap

Research Electronic Data Capture

TOHRC

Ottawa Hospital Rehabilitation Centre

VOMS

Vestibular/Ocular Motor Screening

WSIB

Workplace Safety and Insurance Board

References

1. Fallesen P, Campos B. Effect of concussion on salary and employment: a population-based event time study using a quasi-experimental design. BMJ Open. 2020;10:e038161. doi:10.1136/bmjopen-2020-038161
2. Fawcett AL. Principles of Assessment and Outcome Measurement for Occupational Therapists and Physiotherapists: Theory, Skills and Application. John Wiley & Sons; 2013.
3. Leddy JJ, Haider MN, Noble JM. Clinical assessment of concussion and persistent post-concussive symptoms for neurologists. Curr Neurol Neurosci Rep. 2021;21(12). doi:10.1007/s11910-021-01159-2
4. Resch JE, Brown CN, Schmidt J. The sensitivity and specificity of clinical measures of sport concussion: three tests are better than one. BMJ Open Sport Exerc Med. 2016;2(1):e000012. doi:10.1136/bmjsem-2015-000012
5. Broglio SP, Macciocchi SN, Ferrara MS. Sensitivity of the concussion assessment battery. Neurosurgery. 2007;60(6):1050-1057. doi:10.1227/01.NEU.0000255479.90999.C0
6. Kontos AP, Elbin RJ, Trbovich A. Concussion clinical profiles screening (CP Screen) tool: preliminary evidence to inform a multidisciplinary approach. Neurosurgery. 2020;87(2):348-356. doi:10.1093/neuros/nyz545
7. DuPlessis D, Lam E, Xie L. Multi-domain assessment of sports-related and military concussion recovery: a scoping review. Phys Ther Sport. 2023;59:103-114. doi:10.1016/j.ptsp.2022.11.010
8. Quatman-Yates CC, Hunter-Giordano A, Shimamura KK. Physical therapy evaluation and treatment after concussion/mild traumatic brain injury: clinical practice guidelines linked to the International Classification of Functioning, Disability and Health from the Academy of Orthopaedic Physical Therapy, American Academy of Sports Physical Therapy, Academy of Neurologic Physical Therapy, and Academy of Pediatric Physical Therapy of the American Physical Therapy Association. J Orthop Sports Phys Ther. 2020;50(4):CPG1-CPG73. doi:10.2519/jospt.2020.0301
9. Register-Mihalik JK, Guskiewicz KM, Mihalik JP, Schmidt JD, Kerr ZY, McCrea MA. Reliable change, sensitivity, and specificity of a multidimensional concussion assessment battery: implications for caution in clinical practice. J Head Trauma Rehabil. 2013;28(4):274-283. doi:10.1097/HTR.0b013e3182585d37
10. Diaz G, Easterling K, Luchtefeld T, et al. Are clinical post-concussion tests reliable? A pilot study of test-retest reliability of selected post-concussion tests. Biomed Sci Instrum. 2020;56:199.
11. Langer L, Levy C, Bayley M. Increasing incidence of concussion: true epidemic or better recognition? J Head Trauma Rehabil. 2020;35(1):E60-E66. doi:10.1097/HTR.0000000000000503
12. Beaton MD, Hadly G, Babul S. Stakeholder recommendations to increase the accessibility of online health information for adults experiencing concussion symptoms. Front Public Health. 2020;8:557814. doi:10.3389/fpubh.2020.557814
13. van Ierssel J, O’Neil J, King J, Zemek R, Sveistrup H. Clinician perspectives on providing concussion assessment and management via telehealth: a mixed-methods study. J Head Trauma Rehabil. 2023;38(3):E233-E243. doi:10.1097/HTR.0000000000000829
14. Barnes K, Sveistrup H, Karimijashni M. Barriers and facilitators associated with virtual concussion physical assessments from the perspectives of clinicians and people living with workplace concussions. JMIR Preprints. Preprint posted online on Sep 25, 2024. doi:10.2196/preprints.56158
15. Jansen-Kosterink S, Dekker-van Weering M, van Velsen L. Patient acceptance of a telemedicine service for rehabilitation care: a focus group study. Int J Med Inform. 2019;125:22-29. doi:10.1016/j.ijmedinf.2019.01.011
16. Madhavan S, Sivaramakrishnan A, Bowden MG, Chumbler NR, Field-Fote EC, Kesar TM. Commentary: remote assessments of gait and balance - implications for research during and beyond Covid-19. Top Stroke Rehabil. 2022;29(1):74-81. doi:10.1080/10749357.2021.1886641
17. Wentink M, van Bodegom-Vos L, Brouns B. How to improve eRehabilitation programs in stroke care? A focus group study to identify requirements of end-users. BMC Med Inform Decis Mak. 2019;19(1):145. doi:10.1186/s12911-019-0871-3
18. O’Neil J, Barnes K, Morgan Donnelly E, Sheehy L, Sveistrup H. Identification and description of telerehabilitation assessments for individuals with neurological conditions: a scoping review. Digit Health. 2023;9:20552076231183233. doi:10.1177/20552076231183233
19. Palacín-Marín F, Esteban-Moreno B, Olea N, Herrera-Viedma E, Arroyo-Morales M. Agreement between telerehabilitation and face-to-face clinical outcome assessments for low back pain in primary care. Spine (Phila Pa 1976). 2013;38(11):947-952. doi:10.1097/BRS.0b013e318281a36c
20. Cabana F, Boissy P, Tousignant M, Moffet H, Corriveau H, Dumais R. Interrater agreement between telerehabilitation and face-to-face clinical outcome measurements for total knee arthroplasty. Telemed e-Health. 2010;16(3):293-298. doi:10.1089/tmj.2009.0106
21. Russell TG, Hoffmann TC, Nelson M, Thompson L, Vincent A. Internet-based physical assessment of people with Parkinson disease is accurate and reliable: a pilot study. J Rehabil Res Dev. 2013;50(5):643-650. doi:10.1682/jrrd.2012.08.0148
22. Barnes K, Sveistrup H, Bayley M. Identification of clinical measures to use in a virtual concussion assessment: protocol for a mixed methods study. JMIR Res Protoc. 2022;11(12):e40446. doi:10.2196/40446
23. Barnes K, Sveistrup H, Bayley M. Clinician-prioritized measures to use in a virtual concussion assessment: a mixed-methods study. JMIR Preprints. Preprint posted online on May 11, 2023. doi:10.2196/preprints.47246
24. Kirshner B, Guyatt G. A methodological framework for assessing health indices. J Chronic Dis. 1985;38(1):27-36. doi:10.1016/0021-9681(85)90005-0
25. Trevethan R. Sensitivity, specificity, and predictive values: foundations, pliabilities, and pitfalls in research and practice. Front Public Health. 2017;5:307. doi:10.3389/fpubh.2017.00307
26. Arain M, Campbell MJ, Cooper CL, Lancaster GA. What is a pilot or feasibility study? A review of current practice and editorial policy. BMC Med Res Methodol. 2010;10:67. doi:10.1186/1471-2288-10-67
27. Russell TG, Martin-Khan M, Khan A, Wade V. Method-comparison studies in telehealth: study design and analysis considerations. J Telemed Telecare. 2017;23(9):797-802. doi:10.1177/1357633X17727772
28. Goldman L, Siddiqui EM, Khan A. Understanding acquired brain injury: a review. Biomedicines. 2022;10(9):2167. doi:10.3390/biomedicines10092167
29. Greenberg MS, Wood NE, Spring JD. Pilot study of neurological soft signs and depressive and postconcussive symptoms during recovery from mild traumatic brain injury (mTBI). J Neuropsychiatry Clin Neurosci. 2015;27(3):199-205. doi:10.1176/appi.neuropsych.14050111
30. Masanic CA, Bayley MT. Interrater reliability of neurologic soft signs in an acquired brain injury population. Arch Phys Med Rehabil. 1998;79(7):811-815. doi:10.1016/s0003-9993(98)90361-1
31. Johnston S, Leddy J, Reed N. Virtual concussion exam training manual. Pediatric Concussion Care. 2022. URL: https://pedsconcussion.com/vce-manual/ [accessed 2024-11-21]
32. Mani S, Sharma S, Singh DK. Concurrent validity and reliability of telerehabilitation-based physiotherapy assessment of cervical spine in adults with non-specific neck pain. J Telemed Telecare. 2021;27(2):88-97. doi:10.1177/1357633X19861802
33. Nishida K, Usami T, Matsumoto N, Nishikimi M, Takahashi K, Matsui S. The finger-to-nose test improved diagnosis of cerebrovascular events in patients presenting with isolated dizziness in the emergency department. Nagoya J Med Sci. 2022;84(3):621-629. doi:10.18999/nagjms.84.3.621
34. Termsarasab P, Thammongkolchai T, Rucker JC, Frucht SJ. The diagnostic value of saccades in movement disorder patients: a practical guide and review. J Clin Mov Disord. 2015;2:14. doi:10.1186/s40734-015-0025-4
35. Mucha A, Collins MW, Elbin RJ. A Brief Vestibular/Ocular Motor Screening (VOMS) assessment to evaluate concussions: preliminary findings. Am J Sports Med. 2014;42(10):2479-2486. doi:10.1177/0363546514543775
36. Thoomes-de Graaf M, Thoomes E, Fernández-de-Las-Peñas C, Plaza-Manzano G, Cleland JA. Normative values of cervical range of motion for both children and adults: a systematic review. Musculoskelet Sci Pract. 2020;49:102182. doi:10.1016/j.msksp.2020.102182
37. Kvålseth TO. Note on Cohen’s kappa. Psychol Rep. 1989;65(1):223-226. doi:10.2466/pr0.1989.65.1.223
38. Hazra A. Using the confidence interval confidently. J Thorac Dis. 2017;9(10):4125-4130. doi:10.21037/jtd.2017.09.14
39. From an observed sample: estimates of population prevalence, sensitivity, specificity, predictive values, and likelihood ratios. Clinical Calculator 1. VassarStats. 2024. URL: http://www.vassarstats.net/clin1.html [accessed 2024-10-28]
40. Chan C, Iverson GL, Purtzki J. Safety of active rehabilitation for persistent symptoms after pediatric sport-related concussion: a randomized controlled trial. Arch Phys Med Rehabil. 2018;99(2):242-249. doi:10.1016/j.apmr.2017.09.108
41. Barnes K, Sveistrup H, Bayley M. Reliability and sensitivity of a virtual assessment developed for workplace concussions: protocol for a method-comparison study. JMIR Res Protoc. 2024;13:e57663. doi:10.2196/57663
42. Montes J, Eichinger KJ, Pasternak A, Yochai C, Krosschell KJ. A post pandemic roadmap toward remote assessment for neuromuscular disorders: limitations and opportunities. Orphanet J Rare Dis. 2022;17(1):5. doi:10.1186/s13023-021-02165-w
43. Hunt C, de Saint-Rome M, Di Salle C, Michalak A, Wilcock R, Baker A. Mapping stakeholder perspectives on engagement in concussion research to theory. Can J Neurol Sci. 2020;47(2):202-209. doi:10.1017/cjn.2019.321
44. Tran V, Lam MK, Amon KL. Interdisciplinary eHealth for the care of people living with traumatic brain injury: a systematic review. Brain Inj. 2017;31(13-14):1701-1710. doi:10.1080/02699052.2017.1387932
45. Caze T II, Knell GP, Abt J, Burkhart SO. Management and treatment of concussions via tele-concussion in a pediatric setting: methodological approach and descriptive analysis. JMIR Pediatr Parent. 2020;3(2):e19924. doi:10.2196/19924
46. Almathami HKY, Win KT, Vlahu-Gjorgievska E. Barriers and facilitators that influence telemedicine-based, real-time, online consultation at patients’ homes: systematic literature review. J Med Internet Res. 2020;22(2):e16407. doi:10.2196/16407
47. Hale-Gallardo JL, Kreider CM, Jia H. Telerehabilitation for rural veterans: a qualitative assessment of barriers and facilitators to implementation. J Multidiscip Healthc. 2020;13:559-570. doi:10.2147/JMDH.S247267
48. Ownsworth T, Theodoros D, Cahill L. Perceived usability and acceptability of videoconferencing for delivering community-based rehabilitation to individuals with acquired brain injury: a qualitative investigation. J Int Neuropsychol Soc. 2020;26(1):47-57. doi:10.1017/S135561771900078X
49. Ellis MJ, Russell K. The potential of telemedicine to improve pediatric concussion care in rural and remote communities in Canada. Front Neurol. 2019;10:840. doi:10.3389/fneur.2019.00840
50. Anvari S, Neumark S, Jangra R, Sandre A, Pasumarthi K, Xenodemetropoulos T. Best practices for the provision of virtual care: a systematic review of current guidelines. Telemed e-Health. 2023;29(1):3-22. doi:10.1089/tmj.2022.0004
51. Dorsey ER, Glidden AM, Holloway MR, Birbeck GL, Schwamm LH. Teleneurology and mobile technologies: the future of neurological care. Nat Rev Neurol. 2018;14(5):285-297. doi:10.1038/nrneurol.2018.31
52. Turkstra LS, Quinn-Padron M, Johnson JE, Workinger MS, Antoniotti N. In-person versus telehealth assessment of discourse ability in adults with traumatic brain injury. J Head Trauma Rehabil. 2012;27(6):424-432. doi:10.1097/HTR.0b013e31823346fc
53. O’Neil J, Egan M, Marshall S, Bilodeau M, Pelletier L, Sveistrup H. Remotely supervised exercise programmes to improve balance, mobility, and activity among people with moderate to severe traumatic brain injury: description and feasibility. Physiother Can. 2023;75(2):146-155. doi:10.3138/ptc-2021-0039
54. Luxton DD, Pruitt LD, Osenbach JE. Best practices for remote psychological assessment via telehealth technologies. Prof Psychol Res Pract. 2014;45(1):27-35. doi:10.1037/a0034547
55. Vargas BB, Shepard M, Hentz JG, Kutyreff C, Hershey LG, Starling AJ. Feasibility and accuracy of teleconcussion for acute evaluation of suspected concussion. Neurology. 2017;88(16):1580-1583. doi:10.1212/WNL.0000000000003841
56. Gilbert AW, Billany JCT, Adam R. Rapid implementation of virtual clinics due to COVID-19: report and early evaluation of a quality improvement initiative. BMJ Open Qual. 2020;9(2):e000985. doi:10.1136/bmjoq-2020-000985
57. Vartiainen MV, Rinne MB, Lehto TM, Pasanen ME, Sarajuuri JM, Alaranta HT. The test-retest reliability of motor performance measures after traumatic brain injury. Adv Physiother. 2006;8(2):50-59. doi:10.1080/14038190600700195
58. Youdas JW, Carey JR, Garrett TR. Reliability of measurements of cervical spine range of motion—comparison of three methods. Phys Ther. 1991;71(2):98-104. doi:10.1093/ptj/71.2.98
59. Baracks J, Casa DJ, Covassin T. Acute sport-related concussion screening for collegiate athletes using an instrumented balance assessment. J Athl Train. 2018;53(6):597-605. doi:10.4085/1062-6050-174-17
60. Hunfalvay M, Roberts CM, Murray N, Tyagi A, Kelly H, Bolte T. Horizontal and vertical self-paced saccades as a diagnostic marker of traumatic brain injury. Concussion. 2019;4(1):CNC60. doi:10.2217/cnc-2019-0001
61. Dall’Alba PT, Sterling MM, Treleaven JM, Edwards SL, Jull GA. Cervical range of motion discriminates between asymptomatic persons and those with whiplash. Spine (Phila Pa 1976). 2001;26(19):2090-2094. doi:10.1097/00007632-200110010-00009
62. Ellis MJ, Boles S, Derksen V. Evaluation of a pilot paediatric concussion telemedicine programme for northern communities in Manitoba. Int J Circumpolar Health. 2019;78(1):1573163. doi:10.1080/22423982.2019.1573163
63. Hill S, Barr C, Killington M, McLoughlin J, Daniels R, et al. The design and development of MOVE-IT: a system for remote vestibular and oculomotor assessment in people with concussion. In: Telehealth Innovations in Remote Healthcare Services Delivery. Vol 277. IOS Press; 2021:27-36. doi:10.3233/SHTI210025