Sains Malaysiana 46(12)(2017): 2477–2488


Effects of Speech Phonological Features during Passive Perception on Cortical Auditory Evoked Potential in Sensorineural Hearing Loss
(Kesan Ciri Fonologi Pertuturan semasa Persepsi Pasif pada Korteks Auditori Rangsang Potensi dalam Kehilangan Pendengaran Sensorineural)

HUA NONG TING*, ABDUL RAUF A BAKAR, JAYASREE SANTHOSH, MOHAMMED G. AL-ZIDI, IBRAHIM AMER IBRAHIM & NG SIEW CHEOK

ABSTRACT

The deficiency in the human auditory system of individuals suffering from sensorineural hearing loss (SNHL) is known to be associated with difficulty in detecting the various speech phonological features that are closely related to speech perception. This study investigated the effects of speech articulation features, namely the placing contrast and the voicing contrast, on the amplitude and latency of cortical auditory evoked potential (CAEP) components. Twelve Malay subjects with normal hearing and twelve Malay subjects with SNHL were recruited for the study. The CAEP responses were recorded at higher amplitude and longer latency when stimulated by voicing-contrast cues compared with placing-contrast cues. Subjects with SNHL showed greater amplitudes and prolonged latencies in the majority of the CAEP components for both speech stimuli. The different spectral and time-varying acoustic cues of the speech stimuli were reflected in the strength and timing of the CAEP responses. We anticipate that CAEP responses could equip audiologists and clinicians with useful knowledge concerning the potential deprivation experienced by hearing-impaired individuals during passive auditory perception. This would help to determine which types of speech stimuli are useful for measuring speech perception abilities, especially in the Malaysian Malay ethnic group, and for choosing a better rehabilitation program, since no such study has been conducted to evaluate speech perception in the Malaysian clinical population.

Keywords: Consonant-vowel (CV); cortical auditory evoked potential (CAEP); electroencephalography (EEG); mismatch negativity (MMN); sensorineural hearing loss (SNHL)
ABSTRAK
Kekurangan dalam sistem auditori manusia terhadap individu yang mengalami kehilangan pendengaran sensorineural
(SNHL) diketahui melalui kesukaran dalam mengesan pelbagai ciri ucapan fonologi yang sering berkait-rapat dengan
persepsi pertuturan. Kajian ini mengetengahkan kesan ucapan artikulasi terhadap amplitud dan kependaman pada
komponen potensi terbangkit auditori kortikal (CAEP). Ciri ucapan artikulasi termasuk kontras perletakan dan kontras
suara. Seramai 12 individu normal tahap pendengaran dan 12 individu yang memiliki SNHL telah direkrut untuk kajian
ini. Tindak balas CAEP terhadap isyarat kontras suara direkodkan pada amplitud lebih tinggi serta kependaman lebih
lama berbanding isyarat kontras perletakan. Individu yang memiliki SNHL menghasilkan amplitud lebih tinggi berserta
kependaman lebih panjang dalam kebanyakan komponen CAEP dan ini meliputi kedua-dua rangsangan ucapan.
Kewujudan perbezaan spektrum frekuensi dan beza-masa isyarat akustik pada rangsangan ucapan dicerminkan oleh
kekuatan tindak balas dan tempoh masa CAEPs. Kami menjangkakan bahawa tindak balas CAEPs dapat menyediakan
pengetahuan yang berguna kepada pakar audiologi dan doktor dalam memahami pengurangan potensi yang dihidapi
oleh individu persepsi auditori terjejas. Ini dapat membantu untuk menentukan apa jenis ransangan ucapan yang
bersesuaian dalam menilai keupayaan persepsi ucapan, terutamanya dalam kalangan etnik Melayu di Malaysia
seterusnya memilih program pemulihan yang lebih baik, kerana tidak ada kajian seumpama ini yang pernah dijalankan
untuk menilai persepsi ucapan dalam kalangan penduduk klinikal Malaysia.

Kata kunci: Elektroensefalografi (EEG); hilang saraf deria pendengaran (SNHL); konsonan-vokal (CV); korteks auditori rangsang potensi (CAEP); kenegatifan tak padan (MMN)

INTRODUCTION

Accurate speech perception of the features of speech articulation in spoken language is essential for humans to communicate during social interactions. Speech acoustic phonological features such as the voiced/voiceless distinction and the place and manner of articulation provide a crucial complexity-mapping mechanism that creates a stable neuronal representation in the human auditory system (Anderson et al. 2013; Korczak & Stapells 2010). The significance of speech perception under passive conditions is evident in people suffering from sensorineural hearing loss (SNHL), whose impaired speech articulatory selectivity and discrimination contribute to difficulty in understanding speech (Boothroyd 1993; Siti Zamratol-Mai Sarah et al. 2016; Oates et al. 2002; Wunderlich & Cone-Wesson 2001).

To date, the basis of degraded speech perception among people with SNHL remains unclear. Several investigators have used the cortical auditory evoked potential (CAEP) as a diagnostic tool to investigate how the brain processes phonologic features in the speech signal. The CAEP components constituting the neuronal linguistic processes associated with words and sentences depend on the acoustic continuum being discriminated into the desired neural pattern of perception. Generally, measurements of CAEP response strength (amplitude) and timing (latency) can provide objective information on the auditory processing underlying speech perception in normal listeners as well as in difficult-to-test patients (Arsenault & Buchsbaum 2015; Pratt et al. 2009; Schröder et al. 2014). A previous study showed that CAEP testing can provide productive responses for assessing the auditory pathway without requiring cooperation (passive condition) from the subjects (Agung et al. 2006).

Former studies used speech sounds varied along the acoustic continuum of voice-onset time (VOT) (Tremblay et al. 2003) and frequency spectrum (Korczak & Stapells 2010) and reported different effects on the amplitude and latency of CAEP components. The present study extended these works by evaluating the amplitude and latency of CAEP components in SNHL subjects using Malay complex sounds, i.e. CV tokens that differ in their features of speech articulation. Wunderlich and Cone-Wesson (2001) used the CV tokens /bae/ vs /dae/ and tonal stimuli to demonstrate parallel effects on the N1 and P2 amplitudes, both of which decreased in value as frequency increased. They concluded that there was a close relationship between N1 and MMN and that both reflected the tonotopicity of the auditory cortex. Oates et al. (2002) highlighted the attenuation of CAEP components in subjects with SNHL receiving speech stimuli. Prolonged latency of the late components (N2, P3) compared to the earlier components (N1, MMN) was observed in the subjects with SNHL, indicating that the latency parameters were more sensitive for evaluating decreased audibility than the response strength. For these reasons, the morphology of the CAEP is thought to reflect the functional integrity of the human auditory pathways as a function of the phonologic features of speech during speech perception.

Tremblay et al. (2003) examined the implication of VOT in CV tokens involving voiced/voiceless phonemes. They presented speech tokens at 10 ms increments along a /ba/-/pa/ VOT continuum to young and older adults and found that N1 and P2 latencies were prolonged with longer VOT durations; difficulty was found in discriminating longer time-varying acoustic cues in speech. A more clinical application was carried out by Schröder et al. (2014), who demonstrated the efficacy of the CAEP response in evaluating dysfunction of the brain's early auditory processing system in subjects with misophonia. Their finding of a diminished N1 component to oddball stimuli suggested an underlying neurobiological deficit in misophonia patients.

The mismatch negativity (MMN) response is evoked when a constant train of identical stimuli is interrupted by 'new', infrequent, mismatching afferent stimuli presented to an individual's auditory system. This response arises automatically when an incoming stimulus is compared against a sensory memory trace of the preceding stimuli; it is elicited not only under task-relevant conditions, but also when subjects simply ignore the stimulus stream in favour of a different task, as in passive listening (Luck 2005; Näätänen 1995; Steinhauer 2014). The mainstream clinical interpretation of the MMN began in the late 1990s, when it provided a potential means for measuring possible auditory perception and sensory-memory anomalies (Näätänen et al. 1993). Previous researchers concluded that the human auditory system elicits a greater brain response to speech CV stimuli than to tonal stimuli, as reflected in higher MMN and P3a amplitudes (Jaramillo et al. 2001; Tavabi et al. 2009). Former studies also proposed that the enlargement of MMN amplitudes in native speakers relative to two non-native speaker groups indicates the activation of native-language phonetic prototypes (Picton et al. 1995; Ylinen et al. 2006). Accordingly, the current study included only native Malaysian Malay speakers, to whom Malay CV speech tokens were presented, and we hypothesized that the MMN would be elicited due to the presence of a language memory trace (Näätänen 2001; Ylinen et al. 2006).

The major aim of the current study was to examine the effects on CAEP components of the voiced/voiceless distinction against the place of articulation, using CV stimuli during passive listening, in healthy normal individuals and individuals suffering from SNHL. Collectively, these electrophysiological measures may explain the differences occurring during preconscious speech processing at higher levels of the brain, besides showing the direct relationship between the acoustic signal and the perceived phoneme (Abbs & Sussman 1971; Stapells 2002). We hypothesized that, since these two contrasts are phonetically and spectrally distinct, they may evoke CAEPs with different morphological responses and might provide information on how the auditory pathway performs its discriminant mechanism during passive perception of these different speech sounds, given that the ultimate goal is application in everyday life.

In earlier in-depth work, Korczak and Stapells (2010) examined the effects of three articulatory features of speech, including vowel-space contrast, place of articulation and voiced/voiceless discrimination, in normal subjects. They reported that the brain may have a more difficult task in discriminating consonant stimuli than vowel stimuli due to the rapid transition of formant frequencies. Recent developments thus extend this core idea towards establishing the significance of CAEP components for discriminating various speech articulations, especially in individuals suffering from SNHL, for better knowledge of the electrophysiological correlates of speech perception. To date, no study has evaluated passive neuronal activation to different phonological features of speech sounds in an SNHL population. The study focuses on CAEP data for the Malaysian population, since no such study is available for evaluating speech perception among Malaysians. Therefore, the aims of the present study were to investigate whether CAEP components show different patterns of latencies and amplitudes between speech stimuli, and whether the MMN is elicited in response to Malay CV stimuli so as to disclose any neuropathological changes in the auditory pathway.

METHODS

PARTICIPANTS

The study involved two groups of subjects: first, 12 right-handed Malaysian male adults (fluent Malay speakers) between 20 and 45 years of age (mean age = 32.2 years, SD = 6.9 years) having bilateral SNHL for more than 6 months; and second, 12 right-handed Malaysian male adults (fluent Malay speakers) between 20 and 45 years of age (mean age = 28.7 years, SD = 5.4 years) with normal hearing sensitivity, who served as the control group. The normal-hearing participants recruited were healthy subjects with no past history of otological, psychological or neurological complications and without any speech or hearing disorders. All participants in this study were tested at the Department of Otorhinolaryngology (ENT), University Malaya Medical Centre (UMMC) using routine pure tone audiometry (PTA). Written informed consent was obtained from all participants. Medical ethics clearance was approved by the Medical Ethics Committee, University Malaya Medical Centre (Reference No. 1045.22). Subjects with normal hearing (NH) showed normal pure-tone thresholds of 15 dB hearing level (HL) or better between 250 and 8000 Hz for both ears. The subjects with SNHL suffered from mild to moderate hearing loss bilaterally based on the average of their 500 to 2000 Hz pure-tone thresholds (PTA ≥ 35 dB HL and < 74 dB HL). To evaluate the cognitive state, attention, mental and memory capabilities as well as language deprivation of the selected participants, a simple Mini Mental State Examination (MMSE) was conducted before the recording session (Ali et al. 2013; Folstein et al. 1975).

STIMULUS PRESENTATION

The study selected two sets of speech articulation features: the voiced/voiceless distinction (voicing contrast), ba versus pa (/ba/-/pa/), and the place of articulation contrast (placing contrast), ba versus da (/ba/-/da/). These consonant-vowel (CV) speech stimuli were presented at 80 dB sound pressure level (SPL) to accommodate both degrees of SNHL (Korczak & Stapells 2010). The naturally digitized speech tokens were produced by a female Malaysian Malay native speaker and recorded at a 44,100 Hz sampling frequency. The tokens were edited to 250 ms in duration by removing the initial vocal-cord vibration portion and the end of the steady-state vowel, and by windowing the offset.

The CV stimuli were played in a pseudorandomized oddball paradigm consisting of standard stimuli with 0.8 occurrence probability and deviant stimuli with 0.2 occurrence probability. For each set of speech stimuli, both CV sounds were presented as standards and deviants in separate runs with onset inter-stimulus intervals (ISI) of 800 ± 200 ms. This slightly randomized interval reduces the temporal predictability of the incoming auditory stimulus for both standard and deviant stimuli. The stimuli were delivered via Sennheiser HD 428 closed circumaural headphones to both ears and were calibrated at ear level using a Cirrus Optimus Red CR:160 series sound level meter to obtain the desired SPL (Anderson et al. 2013). The study was done in a passive listening condition and tested over two runs with a few minutes of rest between runs. In each run, the speech CV tokens comprised 400 total stimuli, i.e. 320 standard stimuli and 80 deviant stimuli, arranged such that 2-6 standard stimuli were presented between successive deviant stimuli. Thus, each stimulus contrast yielded a total of 800 stimuli, comprising 160 deviant stimuli and 640 standard stimuli, replicated over the two runs. A counterbalanced paradigm was implemented in which each token acted once as a deviant in one run and once as a standard in the other run for each set of CV stimuli (Duncan et al. 2009; Korczak & Stapells 2010). Figure 1 shows the spectrograms of the three CV tokens used in the present study.

FIGURE 1. Comparison of spectrograms for the three CV syllables. /ba/: 292 Hz (F0), 740 Hz (F1), 1481 Hz (F2), 3228 Hz (F3), 4310 Hz (F4); /da/: 291 Hz (F0), 759 Hz (F1), 1795 Hz (F2), 3155 Hz (F3), 4254 Hz (F4); /pa/: 344 Hz (F0), 928 Hz (F1), 1523 Hz (F2), 3478 Hz (F3), 4437 Hz (F4)
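As an illustration of the presentation scheme described above, the short Python sketch below generates one pseudorandomized oddball run (320 standards, 80 deviants, 2-6 standards between successive deviants, onset ISI jittered around 800 ± 200 ms). It is a minimal reconstruction for clarity only, not the authors' MATLAB presentation code; the function name, the uniform jitter distribution and the use of the random module are our assumptions.

```python
import random

def make_oddball_run(n_standards=320, n_deviants=80,
                     min_gap=2, max_gap=6, seed=None):
    """Return (sequence, isis): a pseudorandom oddball run in which every
    deviant ('D') is preceded by 2-6 standards ('S'), plus a jittered
    inter-stimulus interval (800 +/- 200 ms, assumed uniform) per trial."""
    rng = random.Random(seed)
    # Draw the number of standards before each deviant, then nudge the
    # totals so that exactly n_standards standards are used overall.
    gaps = [rng.randint(min_gap, max_gap) for _ in range(n_deviants)]
    while sum(gaps) != n_standards:
        i = rng.randrange(n_deviants)
        if sum(gaps) > n_standards and gaps[i] > min_gap:
            gaps[i] -= 1
        elif sum(gaps) < n_standards and gaps[i] < max_gap:
            gaps[i] += 1
    sequence = []
    for g in gaps:
        sequence.extend(['S'] * g)   # 2-6 standards ...
        sequence.append('D')         # ... then one deviant
    isis = [rng.uniform(600, 1000) for _ in sequence]  # onset ISI in ms
    return sequence, isis

seq, isis = make_oddball_run(seed=1)
print(len(seq), seq.count('S') / len(seq))   # 400 trials, 0.8 standard probability
```

Running the same generator twice with the token roles swapped reproduces the counterbalanced two-run design (800 stimuli, 640 standards and 160 deviants per contrast).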
ELECTROPHYSIOLOGICAL CORTICAL AUDITORY EVOKED POTENTIAL (CAEP) RECORDINGS

The electroencephalography (EEG) activity underlying the CAEPs was recorded at a sampling frequency of 500 Hz from eight EEG electrode channels with a wireless EEG device (Enobio, Neuroelectrics, Spain) (Ruffini et al. 2007, 2006). Ag/AgCl electrodes were mounted on a neoprene EEG cap and placed at positions Fz, Cz, Pz, C4, T4, C3, T3 and FPz. The active Common Mode Sense (CMS) electrode and the passive Driven Right Leg (DRL) electrode served as reference and ground, respectively; both were connected to two electrodes located at the right mastoid. A standard computer equipped with Neuroelectrics NIC 1.3 software (EEG data collection) and MATLAB R2013a (stimulus presentation and analysis) was used for the electroencephalographic data collection and post-processing analysis. All participants were instructed to sit quietly and comfortably in a soundproof chamber. Prior to the electrophysiological recording, all subjects were asked to minimize eye-blink and muscle-movement artifacts. Since the study involved passive listening conditions, all volunteers were instructed to ignore the incoming auditory stimuli, stay awake and focus on the Malay reading material provided to them. Each set of experiments took about 30 min per run.
CORTICAL AUDITORY EVOKED POTENTIAL (CAEP) WAVEFORMS MEASUREMENT AND DATA ANALYSIS

After completion of the CAEP data recording, the evoked responses were pre-processed offline to remove artifacts, correct baseline drift and filter out power-line interference. These processes were done using a notch filter at 50 Hz and a Butterworth band-pass filter of 1-45 Hz. For each set of experiments, the two successive runs of each group's standard and deviant stimuli were averaged. Due to the counterbalanced paradigm implemented in this study, the evoked responses obtained from the counterbalanced standard and deviant stimuli were summed and averaged with the previous session. Finally, each set of stimulus presentations produced evoked responses that were classified as 'standard' and 'deviant' and grouped separately. The standard response that appeared immediately after a deviant stimulus was excluded from the analysis.
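A minimal Python/SciPy sketch of this preprocessing chain is shown below: a 50 Hz notch and a 1-45 Hz Butterworth band-pass applied to the continuous EEG, followed by epoching around stimulus onsets, baseline correction and averaging. The epoch window (-100 to 500 ms), the filter orders and the use of scipy.signal are our assumptions for illustration; the authors performed these steps in MATLAB.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 500  # sampling rate (Hz), as reported for the Enobio recordings

def preprocess(raw, fs=FS):
    """Apply a 50 Hz notch and a 1-45 Hz Butterworth band-pass (zero-phase)."""
    b_notch, a_notch = iirnotch(w0=50, Q=30, fs=fs)          # power-line notch
    b_bp, a_bp = butter(N=4, Wn=[1, 45], btype='bandpass', fs=fs)
    x = filtfilt(b_notch, a_notch, raw)
    return filtfilt(b_bp, a_bp, x)

def epoch_and_average(eeg, onsets_s, fs=FS, tmin=-0.1, tmax=0.5):
    """Cut epochs around stimulus onsets (in seconds), subtract the mean of
    the pre-stimulus interval, and return the averaged evoked response."""
    n_pre, n_post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for t in onsets_s:
        i = int(round(t * fs))
        seg = eeg[i - n_pre:i + n_post]
        if len(seg) == n_pre + n_post:
            epochs.append(seg - seg[:n_pre].mean())  # baseline correction
    return np.mean(epochs, axis=0)

# Example with synthetic data: 60 s of noise and onsets roughly every second.
rng = np.random.default_rng(0)
raw = rng.standard_normal(60 * FS)
clean = preprocess(raw)
evoked = epoch_and_average(clean, onsets_s=np.arange(1, 58, 1.0))
print(evoked.shape)   # (300,) samples = 600 ms at 500 Hz
```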
The rules used to justify the presence or absence of a response in the passive condition were as follows. The CAEP was inspected visually by two raters and considered present if an individual CAEP peak (e.g. P1, N1, P2, N2, MMN and P3) had a higher amplitude than the pre-stimulus baseline level; it was also inspected quantitatively against previous findings and considered present if the individual's CAEP components showed a maximum correlation coefficient (r) with a significant value (P<0.05). The N1 and N2 components were identified as the most negative peaks occurring between 80-150 ms and 180-250 ms after stimulus onset, respectively. P1 and P2 were defined as the most positive deflections occurring between 55-80 ms and 145-180 ms post-stimulus onset, respectively. P3 was scored between 220-380 ms as the most positive peak appearing after stimulus onset within this response window. The difference CAEP waveform for each set of speech stimuli was used to measure the MMN response, obtained by subtracting the averaged response to the stimulus presented as the standard from the averaged response to the same stimulus presented as the deviant. The MMN was defined as the component having the largest negativity occurring between 100-250 ms at electrode positions Fz or Cz (Duncan et al. 2009; Li et al. 2016). The presence of the MMN was confirmed when it had more negative amplitudes at the fronto-central electrode sites (Fz and Cz) than at the parietal site (Pz). The amplitude of the evoked responses was measured relative to the pre-stimulus baseline as the greatest amplitude recorded, and the latency was taken at the centre of the peak obtained within the respective response window (Duncan et al. 2009; Oates et al. 2002). The late CAEP amplitudes and latencies were recorded from each individual response window to develop the grand-mean waveforms for each CV stimulus set (i.e. /ba/-/da/ and /ba/-/pa/) for the two experimental groups (i.e. normal hearing (NH) and sensorineural hearing loss (SNHL)). The amplitudes and latencies of the various CAEP components were then assessed independently between the hearing-impaired (SNHL) subjects and the controls for the 'standard' and 'deviant' responses.
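The component scoring described above can be expressed compactly as window-constrained peak picking on the averaged waveforms, with the MMN taken from the deviant-minus-standard difference wave. The sketch below only illustrates that logic with the window limits quoted in the text; it is not the authors' scoring code, and the helper names and the assumed 100 ms pre-stimulus segment are ours.

```python
import numpy as np

FS = 500  # Hz; averaged epochs assumed to start 100 ms before stimulus onset

# Component search windows in ms after stimulus onset, as defined in the text;
# the third element is the expected polarity of the peak.
WINDOWS = {
    'P1': (55, 80, +1), 'N1': (80, 150, -1), 'P2': (145, 180, +1),
    'N2': (180, 250, -1), 'P3': (220, 380, +1),
}

def score_component(evoked, name, fs=FS, pre_ms=100):
    """Return (amplitude, latency_ms) of the extremum of the expected polarity
    inside the component's window, relative to the pre-stimulus baseline."""
    lo, hi, sign = WINDOWS[name]
    baseline = evoked[:int(pre_ms * fs / 1000)].mean()
    i0 = int((pre_ms + lo) * fs / 1000)
    i1 = int((pre_ms + hi) * fs / 1000)
    seg = sign * (evoked[i0:i1] - baseline)
    k = int(np.argmax(seg))                 # extremum of the right polarity
    return sign * seg[k], lo + k * 1000 / fs

def mmn(deviant_avg, standard_avg, fs=FS, pre_ms=100):
    """MMN peak from the deviant-minus-standard difference wave, 100-250 ms."""
    diff = deviant_avg - standard_avg
    i0 = int((pre_ms + 100) * fs / 1000)
    i1 = int((pre_ms + 250) * fs / 1000)
    k = int(np.argmin(diff[i0:i1]))         # largest negativity
    return diff[i0 + k], 100 + k * 1000 / fs
```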

STATISTICAL ANALYSIS

An independent t-test was first performed on the ages of the participants to ensure that no other factor might contribute to the main findings of the study. The CAEPs were analyzed using descriptive statistics and a correlation coefficient test, and the final responses were analyzed using two-way repeated measures analysis of variance (ANOVA). These statistical tests were conducted using IBM SPSS Statistics 23.0 software. The study involved two stages of correlation coefficient testing. Initially, each individual's responses were compared with previous studies and the standard typical CAEP waveform (Duncan et al. 2009; Näätänen 1992; Sams et al. 1985). The individual response having the maximum positive correlation coefficient (r2 = 0.825) with the standard waveform was then selected as the typical standard CAEP waveform for the present study. In the second stage, the remaining individual subjects' responses were compared against this typical waveform. Individual responses having maximum correlation with a significant value (P<0.05) were accepted, and those having low correlation and a high P value (p>0.05) were excluded from further analysis.
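The two-stage screening can be illustrated as a simple template-correlation check: each averaged response is correlated with the chosen template waveform and retained only if the Pearson correlation is positive and significant. This Python sketch (scipy.stats.pearsonr) is our reading of the procedure, not the SPSS workflow used by the authors, and the data structures are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr

def screen_responses(responses, template, alpha=0.05):
    """Keep averaged responses whose Pearson correlation with the template
    CAEP waveform is positive and significant (p < alpha)."""
    kept = {}
    for subject_id, waveform in responses.items():
        r, p = pearsonr(waveform, template)
        if r > 0 and p < alpha:
            kept[subject_id] = (r, p)
    return kept

# Toy example: three noisy copies of a template and one pure-noise response.
rng = np.random.default_rng(2)
t = np.linspace(0, 0.6, 300)
template = np.sin(2 * np.pi * 3 * t) * np.exp(-3 * t)
responses = {f's{i}': template + 0.3 * rng.standard_normal(t.size) for i in range(3)}
responses['noise'] = rng.standard_normal(t.size)
print(sorted(screen_responses(responses, template)))   # typically ['s0', 's1', 's2']
```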
The CAEP amplitudes and latencies were then analyzed using two-way repeated measures ANOVA to identify the dependence of the CAEP components on each set of speech stimuli and the relationship between the two phonologic features. The two factors highlighted in the ANOVAs were the speech stimulus /ba/-/da/ and the speech stimulus /ba/-/pa/. Main effects and interactions involving the CAEP components were considered significant if p<0.05.
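For readers who wish to reproduce this style of analysis outside SPSS, a two-way repeated-measures ANOVA over within-subject factors (speech contrast and standard/deviant type) can be run, for example, with statsmodels. The sketch below is only an illustrative stand-in for the authors' SPSS model; the long-format data layout, the factor names and the simulated amplitudes are assumptions, and the between-group (NH vs SNHL) comparison would require a mixed design that this sketch does not cover.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Simulated long-format data: one N1 amplitude per subject x contrast x type.
rng = np.random.default_rng(3)
rows = []
for subject in range(1, 13):                      # 12 subjects in one group
    for contrast in ('ba-da', 'ba-pa'):
        for stim_type in ('standard', 'deviant'):
            amp = -2.0 - (0.5 if stim_type == 'deviant' else 0.0) \
                  + rng.normal(scale=0.4)          # deviants slightly larger
            rows.append((subject, contrast, stim_type, amp))
df = pd.DataFrame(rows, columns=['subject', 'contrast', 'stim_type', 'n1_amp'])

# Two-way repeated-measures ANOVA (within factors: contrast, stim_type).
res = AnovaRM(df, depvar='n1_amp', subject='subject',
              within=['contrast', 'stim_type']).fit()
print(res.anova_table)
```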

RESULTS

Figure 2 shows a sample of an individual control subject's responses to both sets of speech stimuli. This control subject's responses illustrate the common trends found in the averaged CAEP waveforms. Typically, as shown in Figure 2, the CAEP mean amplitudes exhibited higher activation for the deviant stimulus than for the standard stimulus in both speech conditions. The CAEP latencies for the placing contrast were substantially shorter, and produced lower activation, than those for the voicing contrast.

FIGURE 2. CAEP waveforms (Cz) recorded from a control subject. The top row shows the CAEP waveforms for the averaged standard and deviant stimuli in response to the placing contrast (/ba/-/da/). The bottom row shows the CAEP waveforms for the averaged standard and deviant stimuli in response to the voicing contrast (/ba/-/pa/)

Figure 3 shows that a similar trend for both speech stimuli was recorded in the patient's CAEP responses as well as in the difference waveforms. The deviant stimulus demonstrated higher amplitude and shorter latency than the standard for the placing contrast, whereas the voicing contrast showed the opposite pattern of response. As for the MMN (difference) waveform, a higher negativity and longer latency were produced in response to the voicing contrast compared with the placing contrast. Figure 4 shows the average effects of both articulatory features of speech on the amplitudes of the CAEP waveforms recorded at the Cz electrode. The CAEP results revealed that all averaged deviant responses produced higher amplitude activation for both sets of speech stimuli, except for the N1 component in response to the /ba/-/pa/ stimulus. The P1 amplitude of the SNHL subjects was larger in response to the voicing contrast than that of the control subjects, whereas the P2 component showed the reverse pattern of response.

FIGURE 3. CAEP waveforms recorded from an SNHL subject. The top row illustrates the CAEP waveform response to the placing contrast stimuli and the difference waveform recorded at Cz. The bottom row shows the CAEP waveform response to the voicing contrast stimuli and the difference waveform recorded at Cz

FIGURE 4. Mean and SD amplitude for both control and SNHL groups across the two types of speech stimuli

Table 1 shows the latencies of the recorded CAEP waveforms at the Cz electrode for both subject groups. The P1 component showed a shorter latency for the /ba/-/pa/ stimulus in both study groups; the same pattern was shown by the P2 component for the /ba/-/da/ stimulus. The N1 and N2 components were similar in that both evoked longer average latencies in response to the voicing contrast than to the placing contrast. Figure 5 shows the individual data distribution of the CAEP P3 component in response to both speech stimuli. The SNHL group showed a delayed and greater response to the voicing contrast stimulus. The deviant response for each set of stimuli showed a contrasting trend: higher latency and greater amplitude for the voicing contrast stimulus, but earlier response timing with the same pattern of neural activation for the placing contrast stimulus; this was true for both the SNHL and normal hearing groups.

TABLE 1. Mean and SD latencies of CAEP components and MMN for the control and SNHL groups

                                   /ba/-/da/ stimuli                 /ba/-/pa/ stimuli
                               Control (N=12)  SNHL (N=12)      Control (N=12)  SNHL (N=12)
P1 lat (ms)    Stand   Mean        71.06          74.78              68.56          69.65
                       SD           9.38           8.12               7.75           5.94
               Dev     Mean        77.50          78.29              68.05          70.33
                       SD           8.98           5.11               6.52           5.55
N1 lat (ms)    Stand   Mean       108.95         119.28             114.40         129.44
                       SD          13.32           8.15              10.65          10.61
               Dev     Mean       111.85         126.20             122.75         134.50
                       SD          10.35          12.10              14.52           6.73
P2 lat (ms)    Stand   Mean       165.75         160.61             166.30         170.55
                       SD           6.47           6.88               8.11           8.13
               Dev     Mean       164.20         168.06             169.90         167.33
                       SD           7.71           5.18              10.13           8.16
N2 lat (ms)    Stand   Mean       204.30         232.11             220.15         240.65
                       SD          19.26          12.77              11.25          15.52
               Dev     Mean       217.85         234.89             226.05         240.00
                       SD           8.27          10.45              14.29           6.22
P3 lat (ms)    Stand   Mean       315.35         345.00             317.00         360.65
                       SD          19.52          16.90              12.13          18.57
               Dev     Mean       321.85         348.35             324.30         368.95
                       SD          17.87          17.48               8.20          21.73
MMN peak lat (ms)      Mean       187.30         201.23             190.91         215.74
                       SD          25.76          22.46              23.14          22.06

FIGURE 5. Amplitude and latency plots of the P3 component for the different types of stimuli and subject groups. Scatterplots show individual amplitude and latency values for the P3 component

The two-way repeated measures ANOVAs showed no significant differences in the average CAEP amplitude and latency between the responses elicited by the standard and the deviant stimuli. This was found in both experimental groups for both sets of stimuli; therefore, the two stimulus types were averaged together for each articulatory feature in order to test for significant differences in the CAEP components between the groups' responses to the two speech stimuli. The ANOVAs outlined a significant effect of the /ba/-/da/ stimulus on the P1, N1 and MMN amplitudes. A significant main effect was found on the P1 amplitude in response to the placing contrast stimulus between groups (p<0.001). No significant difference was found for the P1 component in response to the voicing contrast stimulus, and there was no significant interaction between the two sets of speech stimuli (p=0.135 and p=0.406). Both N1 and P3 amplitudes and latencies showed significant main effects of both speech phonological features between the control and SNHL groups (N1: p<0.001 vs p=0.015, p=0.006 vs p<0.005; P3: p<0.001 vs p=0.029, p<0.001 vs p=0.001). A significant interaction between the two types of speech features was found; however, no significant interaction was found for the N1 amplitude (N1: p=0.075, p=0.038; P3: p=0.042, p=0.003). As for the MMN response, a significant difference in amplitude and latency was reported for the /ba/-/da/ stimulus compared with the /ba/-/pa/ stimulus (p=0.036 and p=0.004); no statistical significance was found for the interaction between the two sets of speech stimuli.
DISCUSSION

MAIN FINDINGS

The primary purpose of the present study was to determine the implications of various speech phonological features for the amplitudes and latencies of the late CAEP components in a healthy group and in individuals suffering from SNHL drawn from the Malaysian Malay native-speaker population. Auditory evoked responses were successfully recorded from both study groups. The independent t-test showed no significant difference in age between the control group and the SNHL subjects (t(22) = 1.4, p>0.05). In this study, only the central (Cz) electrode was selected for further analysis, as it showed the most significant effects on the CAEP waveforms in response to the speech stimuli and the highest signal-to-noise ratio compared with the other electrode sites (Duncan et al. 2009; Korczak & Stapells 2010; Steinhauer 2014). The Cz electrode provided clearer and more stable CAEP waveforms than the frontal electrode (Fz) for both speech stimuli in both experimental groups. The MMN response was maximal at Cz, in agreement with previously reported studies (Duncan et al. 2009; Steinhauer 2014). The detectability of the CAEP components across both study groups was as follows: P3 was present in 100% of all averages, N2 in 96.25%, P2 in 76%, N1 in 82.5%, P1 in 74.8% and MMN in 75%.

EFFECTS OF THE FEATURES OF SPEECH ARTICULATION ON THE AMPLITUDES AND LATENCIES OF BOTH P1 AND P2

The P1 and P2 components both elicited higher mean amplitudes in response to the voicing contrast than to the placing contrast. The P1 result reported here is consistent with the previous finding that the average amplitude elicited from SNHL subjects was higher than that of the control group for both stimuli (Schröder et al. 2014). A previous study emphasized the difficulty of assessing the P1 component accurately because of its interaction with the C1 wave of the visual event-related potential (VERP), which overlaps substantially with the P1 CAEP component (Luck 2005). The deviant stimulus produced some increment in the auditory P2 amplitude relative to the standard for both stimuli. This finding is in line with earlier studies that outlined the enhancement of the P2 component by infrequent target stimulation (Davies et al. 2010; Luck 2005; Steinhauer 2014).

EFFECTS OF THE FEATURES OF SPEECH ARTICULATION ON THE AMPLITUDES AND LATENCIES OF BOTH N1 AND N2

The N1 component, which originates from the superior temporal gyrus region, plays an important role in spectral and temporal acoustic discrimination during the attentional processing of various speech articulation features; this is reflected in its significant main effects in most of the ANOVA tests (Luck 2005; Näätänen & Picton 1987). We found that the average amplitudes of the auditory N1 and N2 were attenuated in SNHL subjects compared with normal-hearing subjects, suggesting encoding deficits in the auditory processing of information in hearing-impaired subjects. Our finding agrees with previous studies in which reduced N1 auditory responses were found in subjects with misophonia and aphasia (Becker & Reinvang 2013; Schröder et al. 2014). These similarities suggest that the central auditory processing of the sensorineural subjects might be affected by a speech perception disorder. Specifically, the deprivation was more prominent in the auditory N1 response to the place of articulation feature than to the voicing contrast stimulus. In contrast, there was no clear account of the resemblance between the N1 and N2 findings; nevertheless, both may reflect the reliance of the cortical auditory response on the phonologic features of the speech signal (Bien et al. 2016; Carpenter & Shahin 2013; Scharinger et al. 2016).
IMPLICATION OF SPEECH STIMULUS ON MISMATCH NEGATIVITY (MMN) RESPONSE AND P3 COMPONENT

The MMN response was elicited by the Malay CV stimuli due to the presence of language memory traces. This finding supports previous reports that the MMN is evoked by speech stimuli and is enhanced when individuals have automated access to native-language phonetic prototypes (Becker & Reinvang 2007; Näätänen 1995). Both the normal and SNHL subjects exhibited parallel neural activation, producing higher negativity and delayed latencies for the voicing contrast stimulus. Our findings showed that the MMN response elicited by the SNHL subjects was smaller (almost half of the activation) and recorded at longer latencies for both CV speech stimuli compared with the control group. In other words, the altered MMN auditory response of the hearing-impaired subjects was found to reflect not only the detection of speech phonologic features, but also anomalies in this physiological measure of automatic discrimination ability involving central auditory processing (Näätänen 1995; Näätänen & Escera 2000).

The P3 component stood out in that it was present in 100% of all averages. The introduction of the deviant stimuli produced a far greater increment in P3 than in the other components, which identifies the P3 component as the most influential element for understanding CAEP waveforms in response to various speech phonologic features, as it reflects an involuntary switching of passive listening to the odd or deviant stimulus (Reis & Iório 2007).

In this study, the majority of the results converged in demonstrating larger CAEP amplitudes and longer latencies in response to the voicing contrast than to the placing contrast across both subject groups. A plausible explanation for this pattern of responses may be related to the spectral energy content of these sets of stimuli. Agung et al. (2006) reported that speech sounds dominated by low-frequency spectral energy produced higher P1, N1 and P2 amplitudes with longer latencies than speech sounds having higher-frequency spectral energy. One possible explanation relates to the spectral difference in the frequency separation between the first formant (F1) and the second formant (F2): the voicing contrast has a separation of approximately 500-750 Hz, which is narrower than the 700-1100 Hz formant separation of the placing contrast (Korczak & Stapells 2010; Ting et al. 2011; Wunderlich & Cone-Wesson 2001). This condition likely increased the difficulty of the brain's speech discrimination for the voicing contrast compared with the other consonant contrast, thus leading to wider activation of cortical neurons and resulting in the higher voltages and delayed latencies recorded during the phoneme discrimination task. Tavabi et al. (2009) showed that deeper parts of the brain respond better to high-frequency stimulation, whereas superficial regions of the human cortex respond better to low-frequency information, which indirectly supports the present finding of greater response amplitude. Earlier studies also reported higher amplitude responses to speech stimuli, owing to their broad frequency spectrum, in comparison with tonal stimuli (Wunderlich et al. 2006).
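Taking the formant values listed in the Figure 1 caption at face value, the F2-F1 separations can be checked directly; the short calculation below is only that arithmetic, included for convenience, with the grouping of /ba/ and /pa/ as the narrower voicing-contrast pair following the discussion above.

```python
# F1 and F2 (Hz) for each CV token, as listed in the Figure 1 caption.
formants = {'ba': (740, 1481), 'da': (759, 1795), 'pa': (928, 1523)}

for syllable, (f1, f2) in formants.items():
    print(f'/{syllable}/: F2 - F1 = {f2 - f1} Hz')
# /ba/: 741 Hz and /pa/: 595 Hz fall in the narrower ~500-750 Hz (voicing
# contrast) range, while /da/: 1036 Hz sits at the upper end of the
# 700-1100 Hz range quoted for the placing contrast.
```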
Another possible explanation for this finding is the increase in voicing onset duration for the Malay CV /ba/-/pa/ stimulus compared with the placing contrast stimulus. As shown in Figure 1, the voice onset time (VOT) differed between the three syllables. The CV syllables /ba/ and /da/ had the same configuration of the vocal tract but differed in their VOT, as the release sound for /b/ takes a shorter time than for /d/. During stimulus presentation involving the CV transition, there was no great change in VOT for the /ba/-/da/ stimulus, as both exhibit the same voicing pattern with a negative VOT. Conversely, during the CV transition of the /ba/-/pa/ stimulus, there was a large change in voicing onset duration, because the /pa/ syllable has a positive (longer) VOT, in which voicing for the vowel begins after the plosive burst. These temporal cue properties act as a major identifier of voiced versus voiceless phonemes.

The rapid change of the formant transitions during the CV passage supports the occurrence of the higher amplitudes and prolonged response timing of the CAEP components for the voicing contrast stimulus, implying that the underlying passive discrimination process is a more difficult task to perform. A similar study by Tremblay et al. (2003) highlighted the delayed neuronal synchrony of an older adult population, associated with disrupted speech discrimination when dealing with time-varying speech cues along a /ba/-/pa/ CV token continuum; larger amplitudes and delayed responses to stimuli with longer VOT durations were seen in the older subjects.

The current study utilized the CAEP technique to obtain valuable information as a dynamic method of monitoring cognitive-neurological disorder in people with sensorineural hearing loss. One key advantage of the technique is the sensitivity of the CAEP signal, whose voltage deflections at higher processing levels change with specific experimental manipulations, especially during selective attention, expectancy, passive listening and memory updating (Duncan et al. 2009; Picton et al. 2000). This indirectly helps the researcher to focus on the stages of processing that are affected by a given experimental manipulation (Luck 2005; Steinhauer 2014). A second advantage of the CAEP is that this potential can be measured online without the need for a behavioral response; this makes CAEP recordings possible even without the subject's attention and response. For this reason, the present study assessed various speech CV stimulations to understand how the brain performs the discrimination task in hearing-impaired and normal-hearing people.

On the other hand, the CAEP also has disadvantages, especially during data collection. CAEPs are microvolt-level electrical signals that are recorded together with various types of artifact and random noise, so many successful trials are needed to maintain data reliability and accuracy; the number of successful trials can range from fifty to a few thousand per subject for each specific experimental condition (Bidelman 2015; Duncan et al. 2009; Korczak & Stapells 2010; Oates et al. 2002; Wunderlich & Cone-Wesson 2001). This directly prolongs the data recording process and can be impractical for certain patient conditions. In this study, 160 deviant stimuli and 640 standard stimuli were recorded for each subject, a number in line with the requirements of an optimal CAEP recording procedure.

The high complexity, nonlinearity and non-stationarity of the waveforms that characterize electroencephalogram (EEG) signals make clinical interpretation challenging. Several non-linear methods presented by previous researchers, including sample entropy (SampEnt), higher order spectra (HOS), fractal dimension (FD) and recurrence quantification analysis (RQA), provide a better and more valuable mechanism for result interpretation (Acharya et al. 2015, 2011; Chua et al. 2011, 2009). Over the last two decades, further exploration has been conducted using nonlinear dynamic methods, as these techniques extract hidden complexity in the time series of brain signals (Lehnertz 2008; Mormann et al. 2005, 2003). According to Acharya et al. (2013), the higher order spectra (HOS) method is considered one of the more powerful mechanisms for detecting abnormalities and is also useful in the event of signal distortion due to Gaussian noise; this framework has been used persistently to study epilepsy (Chua et al. 2011, 2009). In earlier studies, Babloyantz et al. (1985) used non-linear methods such as the correlation dimension (CD) and the largest Lyapunov exponent (LLE) to study human brain signals during the sleep cycle. In addition, Song et al. (2004) used the recurrence quantification analysis (RQA) method to scrutinize cortical function at different sleep stages, including in people suffering from sleep apnea syndrome.
In 2012, a group of researchers proposed a method using four different entropies, i.e. approximate entropy, sample entropy, phase entropy 1 and phase entropy 2, to interpret EEG signals in epilepsy. Among the various classifiers applied, the fuzzy classifier was concluded to be the best technique and the most suitable tool for automatic detection of normal, pre-ictal and ictal conditions of epilepsy, with an accuracy of 98.1% (Acharya et al. 2012). This idea was extended by the same group, Acharya et al. (2015), who used several non-linear methods of EEG signal analysis to develop a robust automated diagnostic system for depression, called the depression diagnosis index (DDI). The authors also implemented several types of classifier and concluded that the support vector machine (SVM) was the most effective in terms of accuracy, sensitivity and specificity. The novel feature combination in that study proved the efficacy of non-linear methods in assisting medical professionals by providing diagnostic index tools for measuring the severity of depression (Acharya et al. 2015).

To improve the signal denoising process, Wang et al. (2013) proposed a method based on empirical mode decomposition (EMD). This approach is particularly applicable to short inter-stimulus intervals, i.e. when overlap between the desired CAEPs may occur and an inverse (deconvolution) process is needed. The authors successfully improved the signal-to-noise ratio (SNR) of the raw EEG signals in order to optimize the non-stationary signals. The current study used the conventional approach of measuring CAEP components by averaging typically hundreds of electroencephalogram (EEG) epochs at a low stimulus rate so that random noise and various other types of artifact are suppressed. This commonly used technique is in line with our experimental paradigm, since we used a 250 ms stimulus duration with long inter-stimulus intervals (800 ± 200 ms). To resolve the issue of deconvolution (inverse filtering), the standard stimulus that emerged immediately after the deviant stimulus was excluded from the data analysis (Korczak & Stapells 2010; Wang et al. 2013).

CONCLUSION

Our study, conducted on the local ethnic Malay population, demonstrated the significant effects of various speech phonological features on the cortical auditory evoked potential (CAEP) during the discrimination of speech acoustic complexity in people with sensorineural hearing loss. CAEP signals appear to be an effective way to study human auditory processing stages and ailments related to the brain. The mean CAEP amplitudes and latencies of most of the CAEP components were considerably larger and more delayed in response to the voicing contrast than to the placing contrast. The MMN was clearly elicited in both study groups, which shows that the MMN is a suitable tool for behavioral change detection as well as for attention-dependent physiological measures of the human auditory pathway. It may be easier for the brain to discriminate the cues of the placing contrast than those of the voicing contrast, as reflected in shorter response times and lower amplitudes; this result is likely due to the spectral differences and the longer time-varying cues present between these speech contrasts. The present findings should be of great help to clinicians in selecting appropriate features of speech articulation that give good responses when evaluating passive speech perception among people with sensorineural hearing loss. In light of this development, the research also conveys better knowledge of the brain mechanisms involved in discriminating various speech phonemes. The outcome of the present study might be helpful for clinical diagnosis and for further investigation of the effects on central auditory processing in elderly people with sensorineural hearing impairment.

ACKNOWLEDGEMENTS

This research was funded by the University Malaya Research Grant (Grant Number: UMRG RP016D-13AET). The authors express their gratitude to all volunteers who participated in the experiment. The authors declare no conflict of interest.
REFERENCES

Abbs, J.H. & Sussman, H.M. 1971. Neurophysiological feature detectors and speech perception: A discussion of theoretical implications. Journal of Speech, Language, and Hearing Research 14(1): 23-36.
Acharya, U.R., Sudarshan, V.K., Adeli, H., Santhosh, J., Koh, J.E., Puthankatti, S.D. & Adeli, A. 2015. A novel depression diagnosis index using nonlinear features in EEG signals. European Neurology 74(1-2): 79-83.
Acharya, U.R., Sree, S.V., Swapna, G., Martis, R.J. & Suri, J.S. 2013. Automated EEG analysis of epilepsy: A review. Knowledge-Based Systems 45: 147-165.
Acharya, U.R., Molinari, F., Sree, S.V., Chattopadhyay, S., Ng, K.H. & Suri, J.S. 2012. Automated diagnosis of epileptic EEG using entropies. Biomedical Signal Processing and Control 7(4): 401-408.
Acharya, U.R., Sree, S.V., Chattopadhyay, S., Yu, W. & Ang, P.C.A. 2011. Application of recurrence quantification analysis for the automated identification of epileptic EEG signals. International Journal of Neural Systems 21(3): 199-211.
Agung, K., Purdy, S.C., McMahon, C.M. & Newall, P. 2006. The use of cortical auditory evoked potentials to evaluate neural encoding of speech sounds in adults. Journal of the American Academy of Audiology 17(8): 559-572.
Ali, R., Wahab, S., Hamid, A. & Rahman, A. 2013. Neuropsychological profile at three months post injury in patients with traumatic brain injury. Sains Malaysiana 42(3): 403-408.
Anderson, S., Parbery-Clark, A., White-Schwoch, T., Drehobl, S. & Kraus, N. 2013. Effects of hearing loss on the subcortical representation of speech cues. The Journal of the Acoustical Society of America 133(5): 3030-3038.
Arsenault, J.S. & Buchsbaum, B.R. 2015. Distributed neural representations of phonological features during speech perception. The Journal of Neuroscience 35(2): 634-642.
Babloyantz, A., Salazar, J. & Nicolis, C. 1985. Evidence of chaotic dynamics of brain activity during the sleep cycle. Physics Letters A 111(3): 152-156.
Becker, F. & Reinvang, I. 2013. Identification of target tones and speech sounds studied with event-related potentials: Language-related changes in aphasia. Aphasiology 27(1): 20-40.
Becker, F. & Reinvang, I. 2007. Mismatch negativity elicited by tones and speech sounds: Changed topographical distribution in aphasia. Brain and Language 100(1): 69-78.
Bidelman, G.M. 2015. Towards an optimal paradigm for simultaneously recording cortical and brainstem auditory evoked potentials. Journal of Neuroscience Methods 241: 94-100.
Bien, H., Hanulikova, A., Weber, A. & Zwitserlood, P. 2016. A neurophysiological investigation of non-native phoneme perception by Dutch and German listeners. Frontiers in Psychology 7: 56.
Boothroyd, A. 1993. Speech perception, sensorineural hearing loss, and hearing aids. Acoustical Factors Affecting Hearing Aid Performance 2: 277-279.
Carpenter, A.L. & Shahin, A.J. 2013. Development of the N1-P2 auditory evoked response to amplitude rise time and rate of formant transition of speech sounds. Neuroscience Letters 544: 56-61.
Chua, K.C., Chandran, V., Acharya, U.R. & Lim, C.M. 2011. Application of higher order spectra to identify epileptic EEG. Journal of Medical Systems 35(6): 1563-1571.
Chua, K., Chandran, V., Rajendra Acharya, U. & Lim, C. 2009. Analysis of epileptic EEG signals using higher order spectra. Journal of Medical Engineering & Technology 33(1): 42-50.
Davies, P.L., Chang, W.P. & Gavin, W.J. 2010. Middle and late latency ERP components discriminate between adults, typical children, and children with sensory processing disorders. Frontiers in Integrative Neuroscience 4: 16.
Duncan, C.C., Barry, R.J., Connolly, J.F., Fischer, C., Michie, P.T., Näätänen, R., Polich, J., Reinvang, I. & Van Petten, C. 2009. Event-related potentials in clinical research: Guidelines for eliciting, recording, and quantifying mismatch negativity, P300, and N400. Clinical Neurophysiology 120(11): 1883-1908.
Folstein, M.F., Folstein, S.E. & McHugh, P.R. 1975. "Mini-mental state": A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research 12(3): 189-198.
Jaramillo, M., Ilvonen, T., Kujala, T., Alku, P., Tervaniemi, M. & Alho, K. 2001. Are different kinds of acoustic features processed differently for speech and non-speech sounds? Cognitive Brain Research 12(3): 459-466.
Korczak, P.A. & Stapells, D.R. 2010. Effects of various articulatory features of speech on cortical event-related potentials and behavioral measures of speech-sound processing. Ear and Hearing 31(4): 491-504.
Lehnertz, K. 2008. Epilepsy and nonlinear dynamics. Journal of Biological Physics 34(3-4): 253-266.
Li, Z., Gu, R., Zeng, X., Zhong, W., Qi, M. & Cen, J. 2016. Attentional bias in patients with decompensated tinnitus: Prima facie evidence from event-related potentials. Audiology and Neurotology 21(1): 38-44.
Luck, S. 2005. An Introduction to Event-Related Potentials and their Neural Origins. Cambridge: MIT Press.
Mormann, F., Kreuz, T., Rieke, C., Andrzejak, R.G., Kraskov, A., David, P., Elger, C.E. & Lehnertz, K. 2005. On the predictability of epileptic seizures. Clinical Neurophysiology 116(3): 569-587.
Mormann, F., Kreuz, T., Andrzejak, R.G., David, P., Lehnertz, K. & Elger, C.E. 2003. Epileptic seizures are preceded by a decrease in synchronization. Epilepsy Research 53(3): 173-185.
Näätänen, R. 2001. The perception of speech sounds by the human brain as reflected by the mismatch negativity (MMN) and its magnetic equivalent (MMNm). Psychophysiology 38(1): 1-21.
Näätänen, R. 1995. The mismatch negativity: A powerful tool for cognitive neuroscience. Ear and Hearing 16(1): 6-18.
Näätänen, R. 1992. Attention and Brain Function. Hillsdale, New Jersey: Lawrence Erlbaum Associates.
Näätänen, R. & Escera, C. 2000. Mismatch negativity: Clinical and other applications. Audiology and Neurotology 5(3-4): 105-110.
Näätänen, R. & Picton, T. 1987. The N1 wave of the human electric and magnetic response to sound: A review and an analysis of the component structure. Psychophysiology 24(4): 375-425.
Näätänen, R., Paavilainen, P., Titinen, H., Jiang, D. & Alho, K. 1993. Attention and mismatch negativity. Psychophysiology 30(5): 436-450.
Oates, P.A., Kurtzberg, D. & Stapells, D.R. 2002. Effects of sensorineural hearing loss on cortical event-related potential and behavioral measures of speech-sound processing. Ear and Hearing 23(5): 399-415.
Picton, T., Bentin, S., Berg, P., Donchin, E., Hillyard, S., Miller, G.A., Ritter, W., Ruchkin, D.S., Rugg, M.D. & Taylor, M.J. 2000. Guidelines for using human event-related potentials to study cognition: Recording standards and publication criteria. Psychophysiology 37(2): 127-152.
Picton, T.W., Lins, O.G. & Scherg, M. 1995. The recording and analysis of event-related potentials. In Handbook of Neuropsychology. Vol. 10, edited by Boller, F. & Grafman, J.
Pratt, H., Starr, A., Michalewski, H.J., Dimitrijevic, A., Bleich, N. & Mittelman, N. 2009. Auditory-evoked potentials to frequency increase and decrease of high- and low-frequency tones. Clinical Neurophysiology 120(2): 360-373.
Reis, A.C.M.B. & Iório, M.C.M. 2007. P300 in subjects with hearing loss. Pró-Fono Revista de Atualização Científica 19(1): 113-122.
Ruffini, G., Dunne, S., Farrés, E., Cester, I., Watts, P.C., Ravi, S., Silva, P., Grau, C., Fuentemilla, L., Marco-Pallares, J. & Vandecasteele, B. 2007. ENOBIO dry electrophysiology electrode; first human trial plus wireless electrode system. Paper presented at the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2007).
Ruffini, G., Dunne, S., Farrés, E., Marco-Pallarés, J., Ray, C., Mendoza, E., Silva, R. & Grau, C. 2006. A dry electrophysiology electrode using CNT arrays. Sensors and Actuators A: Physical 132(1): 34-41.
Sams, M., Paavilainen, P., Alho, K. & Näätänen, R. 1985. Auditory frequency discrimination and event-related potentials. Electroencephalography and Clinical Neurophysiology/Evoked Potentials Section 62(6): 437-448.
Scharinger, M., Monahan, P.J. & Idsardi, W.J. 2016. Linguistic category structure influences early auditory processing: Converging evidence from mismatch responses and cortical oscillations. NeuroImage 128: 293-301.
Schröder, A., van Diepen, R., Mazaheri, A., Petropoulos-Petalas, D., de Amesti, V.S., Vulink, N. & Denys, D. 2014. Diminished N1 auditory evoked potentials to oddball stimuli in misophonia patients. Frontiers in Behavioral Neuroscience 8: 123.
Siti Zamratol-Mai Sarah Mukari, Nashrah Maamor, Wan Syafira Ishak & Wan Fazlina Wan Hashim. 2016. Hearing loss and risk factors among community dwelling older adults in Selangor. Sains Malaysiana 45(9): 1405-1411.
Song, I.H., Lee, D.S. & Kim, S.I. 2004. Recurrence quantification analysis of sleep electroencephalogram in sleep apnea syndrome in humans. Neuroscience Letters 366(2): 148-153.
Stapells, D. 2002. Cortical event-related potentials to auditory stimuli. Handbook of Clinical Audiology 5: 378-406.
Steinhauer, K. 2014. Event-related potentials (ERPs) in second language research: A brief introduction to the technique, a selected review, and an invitation to reconsider critical periods in L2. Applied Linguistics 35(4): 393-417.
Tavabi, K., Elling, L., Dobel, C., Pantev, C. & Zwitserlood, P. 2009. Effects of place of articulation changes on auditory neural activity: A magnetoencephalography study. PloS One 4(2): e4452.
Ting, H.N., Chia, S.Y., Hamid, B.A. & Mukari, S.Z.M.S. 2011. Acoustic characteristics of vowels by normal Malaysian Malay young adults. Journal of Voice 25(6): 305-309.
Tremblay, K.L., Piskosz, M. & Souza, P. 2003. Effects of age and age-related hearing loss on the neural representation of speech cues. Clinical Neurophysiology 114(7): 1332-1343.
Wang, T., Lin, L., Zhang, A., Peng, X. & Zhan, C.A. 2013. EMD-based EEG signal enhancement for auditory evoked potential recovery under high stimulus-rate paradigm. Biomedical Signal Processing and Control 8(6): 858-868.
Wunderlich, J.L. & Cone-Wesson, B.K. 2001. Effects of stimulus frequency and complexity on the mismatch negativity and other components of the cortical auditory-evoked potential. The Journal of the Acoustical Society of America 109(4): 1526-1537.
Wunderlich, J.L., Cone-Wesson, B.K. & Shepherd, R. 2006. Maturation of the cortical auditory evoked potential in infants and young children. Hearing Research 212(1): 185-202.
Ylinen, S., Shestakova, A., Huotilainen, M., Alku, P. & Näätänen, R. 2006. Mismatch negativity (MMN) elicited by changes in phoneme length: A cross-linguistic study. Brain Research 1072(1): 175-185.

Hua Nong Ting*, Abdul Rauf A Bakar, Mohammed G. Al-Zidi & Ng Siew Cheok
Department of Biomedical Engineering
Faculty of Engineering
University of Malaya
50603 Kuala Lumpur, Federal Territory
Malaysia

Jayasree Santhosh
Department of Computer Engineering & Computer Science
School of Science and Engineering
Manipal International University
71800 Nilai, Negeri Sembilan Darul Khusus
Malaysia

Jayasree Santhosh
Centre for Biomedical Engineering
Indian Institute of Technology-Delhi, New Delhi
India

Ibrahim Amer Ibrahim
Department of Electrical Engineering
Faculty of Engineering
University of Malaya
50603 Kuala Lumpur, Federal Territory
Malaysia

*Corresponding author; email: tinghn@[Link]

Received: 16 July 2016
Accepted: 4 April 2017