The neural basis of arithmetic and phonology in deaf signing individuals

ABSTRACT Deafness is generally associated with poor mental arithmetic, possibly due to neuronal differences in arithmetic processing across language modalities. Here, we investigated for the first time the neuronal networks supporting arithmetic processing in adult deaf signers. Deaf signing adults and hearing non-signing peers performed arithmetic and phonological tasks during fMRI scanning. At whole brain level, activation patterns were similar across groups. Region of interest analyses showed that although both groups activated phonological processing regions in the left inferior frontal gyrus to a similar extent during both phonological and multiplication tasks, deaf signers showed significantly more activation in the right horizontal portion of the intraparietal sulcus. This region is associated with magnitude manipulation along the mental number line. This pattern of results suggests that deaf signers rely more on magnitude manipulation than hearing non-signers during multiplication, but that phonological involvement does not differ significantly between groups. Abbreviations: AAL: Automated Anatomical Labelling; fMRI: functional magnetic resonance imaging; HIPS: horizontal portion of the intraparietal sulcus; lAG: left angular gyrus; lIFG: left inferior frontal gyrus; rHIPS: right horizontal portion of the intraparietal sulcus


Introduction
Deaf individuals have been found to perform worse than hearing individuals on mathematics in general (Bull, Marschark, & Blatto-Vallee, 2005; Kritzer, 2009) and in particular on mathematical tasks with verbal requirements such as multiplicative reasoning (Andin, Rönnberg, & Rudner, 2014; Nunes et al., 2009), relational statements (e.g. less than, more than; Kelly, Lang, Mousley, & Davis, 2003; Serrano Pau, 1995) and fractions (Titus, 1995). However, it is unclear what lies behind these differences. In hearing individuals, success in mathematics requires a combination of encoding and retrieval of arithmetic facts and magnitude manipulation (Dehaene, Piazza, Pinel, & Cohen, 2003; Lee & Kang, 2002). While encoding and retrieval require access to lexical representations through verbal processing, magnitude manipulation taps into quantity processing.
Because deaf individuals display poorer performance in arithmetic tasks requiring verbal processes, but not in tasks requiring magnitude manipulations, there is a reason to believe that they use the verbal system differently from hearing individuals when performing arithmetic tasks. This may be due to the use of different language modalities, i.e. signed versus spoken language. If this is the case, deaf sign language users and hearing non-signers are likely to show, during arithmetic tasks, differential activation of the neuronal substrates of the verbal system, which are otherwise rather similar across the language modalities of sign and speech (MacSweeney, Capek, Campbell, & Woll, 2008).
In hearing, non-signing, individuals (Andin, Fransson, Rönnberg, & Rudner, 2015), we found phonological processing to be left lateralised and arithmetic to be represented bilaterally within a language-calculation network including bilateral parietal regions, with some overlap between phonological and arithmetic processing in the left hemisphere. In the present study, we use simple arithmetic tasks (multiplication and subtraction) and a phonological task to highlight the potential differences in the engagement of the neuronal substrates of arithmetic and phonology in deaf signers and hearing non-signers.
Signed languages are natural languages that are independent of the surrounding spoken languages in both vocabulary and grammar, but support the same linguistic functions (Emmorey, 2002). In particular, it has been shown that signed and spoken languages share sublexical structure that can be described as phonology (Sandler & Lillo-Martin, 2006). Phonology in spoken languages is concerned with the combination of sounds to form words, and phonological processing can be invoked by judging whether the sounds occurring at the same location in two different words are similar. When the relevant sounds are at the end of the words, this is referred to as rhyme judgement. Visual rhyme judgement is a commonly used test of phonological processing ability (for a review see Classon, Rudner, & Rönnberg, 2013).
For sign languages, phonology refers to the way in which four characteristics of the signing hand (handshape, location, orientation and movement) are combined in a specific sign (Sandler & Lillo-Martin, 2006). Hence, sign-related phonological processing can be invoked by asking whether two signs share one or more of these characteristics. Further, many signed languages make use of manual systems, including manual alphabets and manual numerals, to represent letters and digits (Brentari, 1998). The Swedish manual alphabet and numerals are based on Swedish Sign Language (SSL) handshapes but minimise the role of movement and location (Bergman, 2012). SSL, like American Sign Language (ASL), makes extensive use of fingerspelled words and signs (Padden & Gunsauls, 2003) and thus deaf children encounter manual systems very early in life (Bergman, 2012). This means that the manual systems are well established in Swedish deaf native signers (Andin et al., 2014). In the present study, a selection of digit-letter pairs is used whose labels according to the Swedish manual systems share handshape but may differ in orientation and/or movement (Table 1). For example, although the manual numeral for the digit 1 and the manual signs for the letters L and Z differ in orientation, they share the same handshape in SSL and can thus be considered phonologically similar (Table 1). In an ERP study, Gutierrez, Muller, Baus, and Carreiras (2012) showed that when comparing different phonological parameters of sign language, the effect of handshape occurs later than that of location and at the same time as effects of rhymes in spoken language. This indicates that, at a meta-linguistic level, phonological similarity of handshape is comparable to the rhyme that occurs when the endings of spoken words are pronounced in a similar manner (see also Holmer, Heimann, & Rudner, 2016).
Phonological processing in hearing individuals activates a left-lateralised perisylvian language network (e.g. Andin et al., 2015; Hickok & Poeppel, 2007; Shivde & Thompson-Schill, 2004). Deaf signers have been shown to activate largely similar neural regions when judging whether sign labels of pictures share a location (MacSweeney, Waters, Brammer, Woll, & Goswami, 2008). This was interpreted as suggesting that phonological processing mechanisms are amodal or supramodal, at least to some degree. However, another study has found phonological tasks to activate more anterior portions of the left inferior frontal gyrus (lIFG) in deaf signers compared to hearing non-signers (Rudner, Karlsson, Gunnarsson, & Rönnberg, 2013). For spoken languages, the anterior portion of the lIFG, pars triangularis, has been suggested to be involved primarily in semantic processing, whereas the posterior portion, pars opercularis, is involved in phonological processing (McDermott, Petersen, Watson, & Ojemann, 2003). Hence, the more anterior representation of phonological processing for sign language compared to speech may reflect a closer relationship between semantic and phonological processing in signed language due to inherent iconicity (Marshall, Rowley, & Atkinson, 2013; Rudner et al., 2013; Thompson, Vinson, Woll, & Vigliocco, 2012). In particular, it may suggest that the phonological processing invoked by the tasks in the studies described above (MacSweeney, Brammer, Waters, & Goswami, 2009; MacSweeney, Waters, et al., 2008; Rudner et al., 2013) is dependent on semantic processes. To control for a potentially closer relationship between semantic and phonological processing in signed language, here we use a phonological task designed to keep semantic processing to a minimum.
According to the triple code model, numbers are represented in three different systems, the verbal, quantity and visual/attentional systems, which have distinct neural representations and are engaged differently depending on competence and the task at hand (Dehaene et al., 2003). Processing of multiplication and subtraction, which are in focus in the present study, has primarily been linked to the verbal and the quantity systems. The verbal system engages the left angular gyrus (lAG) and concerns verbal representations of numbers involved in arithmetic fact retrieval, which is normally used in multiplication. Because deaf signers generally perform at lower levels than hearing non-signers during multiplication, these two groups are likely to show differential activation of the neural substrates of the verbal system during such tasks. However, it should be emphasised that the previous literature on number cognition in deaf individuals has not systematically treated the influence of language proficiency. In the present study we have matched the hearing and deaf groups very carefully and only included deaf individuals with native or native-like sign language skills, which makes for a more stringent interpretation and could result in different results with regard to the involvement of the verbal system. The quantity system is primarily involved in magnitude manipulation along a mental analogue number line and engages the right horizontal portion of the intraparietal sulcus (rHIPS; Dehaene et al., 2003). Tasks that rely on this system, e.g. subitising (i.e. the ability to rapidly judge small numbers of items without counting them), subtraction and number comparisons, are performed equally well by deaf and hearing individuals (e.g. Andin et al., 2014; Bull et al., 2005; Bull, Blatto-Vallee, & Fabich, 2006), and should thus elicit similar rHIPS activation in both groups.
However, we have recently shown that number ordering elicits stronger activation for deaf signers than hearing non-signers, despite comparable behavioural performance, indicating qualitatively different processes in this region (Andin, Fransson, Rönnberg, & Rudner, 2018). It has further been suggested that different parts of the lIFG are also involved in calculation tasks related to verbal processing (Dehaene et al., 2003; Lee & Kang, 2002). However, activation in this region has also been suggested to be related to the sub-vocalisation or syntactic processing required to comprehend the arithmetic problem rather than calculation per se (Rickard et al., 2000). Since recent evidence suggests that phonological processing is organised more anteriorly for signed compared to spoken language, it is possible that deaf signers and hearing non-signers will show differential activation of the neural substrates in the lIFG during simple arithmetic.

Table 1. Characters used in the stimuli as well as their corresponding articulation according to the Swedish manual system and Swedish. Ten phonologically similar digit-letter pairs were selected according to the Swedish manual system and ten according to Swedish. These sets of pairs did not overlap.
The aim of the present study is to investigate the neuronal networks supporting arithmetic and phonological processing in adult deaf native or native-like signers and whether they differ from functionally equivalent networks in hearing non-signers. We predict more involvement of right-lateralised parietal regions, especially the rHIPS, and less involvement of left-lateralised parietal regions, primarily the lAG, for multiplication in deaf signers compared to hearing non-signers, reflecting stronger involvement of the quantity system and weaker involvement of the verbal system. For subtraction, we expect to find similar activation patterns for the two groups, reflecting similar involvement of the quantity system. Further, we hypothesise that if the anterior activation of the lIFG previously found for deaf signers in several studies (MacSweeney, Waters, et al., 2008; Rudner et al., 2013) represents phonological processing, we will find activation more anteriorly in deaf signers compared to hearing non-signers for all three experimental tasks (multiplication, subtraction and phonology). However, if the previously found activation is instead related to semantics rather than phonology, we predict no activation for phonology in the lIFG for the deaf signers, since the phonological task used in the present study avoids a semantic route to phonology.

Participants
Sixteen deaf adults (M = 28.1 years, SD = 3.44, range 21-32; eleven women) and seventeen native Swedish speaking hearing adults (M = 28.6 years, SD = 4.85, range 22-37, twelve women) participated in the experiment. All participants were right-handed, had normal or above normal non-verbal intelligence (as measured by Raven's progressive matrices) and had completed at least 12 years of formal schooling (including five deaf and five hearing with a university-level education). Participants had normal, or corrected-to-normal, vision and were screened for neurological and psychiatric illnesses. Pregnancy, claustrophobia, medications (except for contraceptives) and having non-MRI compatible metal implants were further used as exclusion criteria.
All deaf participants used Swedish Sign Language daily as their primary language and reported that they did not use spoken language by speaking and/or speech reading in their everyday life. Fifteen participants were deaf from birth and the sixteenth was deaf from the age of six months. Six participants were signed with from birth and the rest started their sign language acquisition before the age of two (M = 10 months, SD = 10); thus, all of the deaf participants can be considered to have native or native-like knowledge of Swedish Sign Language. In Sweden, deaf children are entitled to attend deaf schools from preschool to high school. These schools have a bilingual curriculum where the acquisition of knowledge through text is taught using SSL. Further, hearing parents of deaf children are offered extensive SSL courses, which, together with the bilingual curriculum taught in deaf schools, have led to favourable linguistic development for deaf children born in the 70s, 80s and 90s (Meristo et al., 2007; Roos, 2006).
All participants gave written informed consent to taking part in the study and were compensated for their time and travel expenses. The study was approved by the regional ethical review board in Linköping, Sweden (Dnr 190/05) and was carried out in accordance with the ethical standards of the Declaration of Helsinki. Results from the Swedish hearing non-signing group have been published previously, but are included here as a control group to the deaf group (Andin et al., 2015).

Experimental design
In all conditions, the stimuli consisted of three digit-letter pairs (see Figure 1). The pairs included the digits 0-9, and the letters were restricted to B, D, E, G, H, K, L, M, O, P, Q, T, U, V, X, Z, as well as the Swedish characters Å and Ö. These pairs were chosen based on the phonological characteristics of their verbal labels in Swedish and the Swedish manual systems for alphabetic and numerical signs. The pairs were constructed taking into account whether or not they rhymed in Swedish or shared a handshape in the Swedish manual systems (see Table 1). There were 10 phonologically similar digit-letter pairs according to spoken Swedish and 10 according to the Swedish manual systems (see Table 1). These sets did not overlap. Thus, none of the digit-letter pairs were phonologically similar for both spoken and manual interpretations. For example, in Figure 1, the digit-letter pair L1 is phonologically similar according to the Swedish manual systems but not Swedish, whereas T3 is phonologically similar for Swedish but not the manual systems.
There were 40 unique stimuli that were used as a basis for all tasks with 20 generating yes responses and 20 generating no responses orthogonally distributed across conditions. Participants completed tasks of digit order ("are the presented digits in numerical order"), letter order ("are the presented letters in alphabetic order"), multiplication ("does one of the presented digits represent the product of the two others"), subtraction ("does one of the presented digits represent the difference between the two others"), phonology ("are the digit and letter within any of the presented pairs phonologically similar") and a visual control task ("are there two dots over any of the presented letters") (see also Andin et al., 2014). The phonology task differed superficially for the two groups. The hearing non-signers were asked to judge whether the Swedish lexical labels of the digit and letter within any of the presented pairs rhymed in Swedish and the deaf signers were asked to judge whether the digit and letter within any of the presented pairs shared a handshape according to the Swedish manual systems. However, both tasks were designed to tap into phonological processing at the meta-linguistic level. In particular, both tasks required mapping of the orthography of the character pairs presented to phonology in the appropriate language modality, and then comparing those phonological representations. Neither task could be solved without recourse to phonological representations. The other tasks were identical for both groups. Results related to the digit and letter ordering tasks are reported in Andin et al. (2018).
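The yes/no logic of the two arithmetic tasks can be made concrete with a short sketch (an illustrative reconstruction, not the authors' stimulus-generation code; the digit triple 1, 3, 4 is the example shown in Figure 1):

```python
from itertools import permutations

def is_multiplication_yes(digits):
    """True if one presented digit is the product of the two others."""
    return any(a * b == c for a, b, c in permutations(digits))

def is_subtraction_yes(digits):
    """True if one presented digit is the difference of the two others."""
    return any(a - b == c for a, b, c in permutations(digits))

# Example triple from Figure 1: digits 1, 3, 4
print(is_subtraction_yes((1, 3, 4)))     # True: 4 - 1 = 3
print(is_multiplication_yes((1, 3, 4)))  # False: no product solution
```

Checking all orderings of the three digits mirrors the task instruction, which does not specify which digit plays which role in the equation.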
Participants performed four fMRI runs of 366 s each, where each run included twelve blocks with five trials in each. Each block type appeared twice per run. In total, the four fMRI runs included 240 trial presentations, i.e. each of the 40 unique trials was presented once per condition. The same 40 trials were thus used for all conditions, but in randomised order for each condition. Blocks were pseudorandomised into four runs. The four runs were presented in randomised order for each participant. Each trial started with a 1000 ms interval during which the cue, indicating which task to perform, was presented. The stimulus was then displayed for 4000 ms. Thus, each block lasted for 25 s. Between blocks there was a 5 s rest period, during which a ¤ symbol was presented. Participants were instructed to relax and keep still during the rest period. At the beginning of each run a blank screen appeared for 10 s before fMRI scanning started. Stimuli were presented using the Presentation software (Presentation version 10.2, Neurobehavioral Systems Inc., Albany, CA) and back-projected onto a screen positioned at the feet of the participant. The participants viewed the screen through an angled mirror on top of the head coil. Before the participants were positioned in the scanner, they were instructed to respond as accurately and quickly as possible during the presentation of each trial, by pressing one of two buttons using their right thumb and index finger. A professional accredited sign language interpreter was present during testing of the deaf participants and provided them with a verbatim translation of test instructions and an opportunity to ask questions if needed. During scanning, instructions were repeated orally for the hearing participants and in written form on the screen for the deaf participants.
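As a sanity check, the stated 25 s block duration follows directly from the trial structure (a minimal sketch; the per-run total additionally depends on the rest periods and the initial blank screen, which are not computed here):

```python
CUE_MS = 1000          # task cue at trial onset
STIM_MS = 4000         # stimulus display
TRIALS_PER_BLOCK = 5   # five trials per block

trial_s = (CUE_MS + STIM_MS) / 1000      # 5 s per trial
block_s = trial_s * TRIALS_PER_BLOCK     # 25 s per block, as stated above

print(trial_s, block_s)  # 5.0 25.0
```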
All participants were enrolled in a behavioural testing session (reported in Andin et al., 2013, 2014) at least one month before the fMRI session. During that session, they were randomised to perform two out of the four runs used in the MR sessions. This was done to ensure task familiarisation and compliance during scanning. There were no significant differences in performance between stimuli previously presented in the behavioural session and the new stimuli (F(1,30) = 0.263, p = .612). Before starting the fMRI session, all participants were reminded about the task and allowed to perform a practice run, with material not used in the fMRI session, until they felt confident in performing the task (1-2 practice runs were used).

Figure 1. Example stimulus. In digit order, participants judge whether the digits are in numerical order (i.e. 1, 3, 4); in letter order, whether the letters are in alphabetical order (i.e. L, T, Ö; where Ö is the last letter of the Swedish alphabet); in subtraction, whether one digit minus one of the others equals the third (i.e. 4 - 1 = 3); in multiplication, whether the product of any two digits equals the third (i.e. here no solution); and in phonology, whether any of the three presented pairs rhymed (for hearing participants; i.e. T rhymes with 3) or shared a handshape (for deaf participants; i.e. the handshape is shared for L and 1). Tasks were blocked, i.e. one task was performed at a time.

Data acquisition
Functional gradient-echo EPI images (repetition time (TR) = 2500 ms, echo time (TE) = 40 ms, field of view (FOV) = 220 × 220 mm, flip angle = 90 degrees, in-plane resolution of 3.5 × 3.5 mm, slice thickness of 4.5 mm, slice gap of 0.5 mm, with enough axial slices to cover the whole brain) were acquired on a 1.5 T GE Instruments MR scanner (General Electric Company, Fairfield, CT, USA) equipped with a standard eight-element head coil, at the Karolinska Institute, Stockholm, Sweden. The initial ten-second fixation period without task presentation was discarded to allow for T1-equilibrium processes. Anatomical images were collected using a fast spoiled gradient echo sequence at the end of the scanning session (voxel size 0.8 × 0.8 × 1.5 mm, TR = 24 ms, TE = 6 ms).

Statistical analysis
Initially, the quality of the image data was examined using TSDiffAna (Freiburg Brain Imaging, version updated 2015-02-09). Data from the first fMRI run were removed from further analyses for four participants (three deaf and one hearing) who moved more than 3 mm in at least one direction. Thereafter, all image data were pre-processed and analysed using statistical parametric mapping software (SPM8; Wellcome Trust Centre for Neuroimaging, London, UK) running under MatLab r2010a (MathWorks, Inc., Natick, MA, USA). Preprocessing included realignment, coregistration, normalisation to the MNI152 template and spatial smoothing using a 10 mm FWHM Gaussian kernel, following standard SPM procedures.
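For readers less familiar with smoothing kernels: the 10 mm FWHM relates to the standard deviation of the Gaussian by sigma = FWHM / (2 * sqrt(2 * ln 2)). A quick check of this conversion (illustrative only, not part of the SPM pipeline itself):

```python
import math

def fwhm_to_sigma(fwhm_mm: float) -> float:
    """Convert a Gaussian kernel's FWHM to its standard deviation (same units)."""
    return fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

# A 10 mm FWHM kernel corresponds to roughly a 4.25 mm standard deviation
print(round(fwhm_to_sigma(10.0), 3))  # 4.247
```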
To adjust for potential non-compliance, blocks with more than two incorrect answers were discarded from the analysis. Two multiplication blocks (from one participant) and seven phonology blocks (from five different participants) were removed from the deaf group, and one subtraction and two phonology blocks (from two different participants) were removed from the hearing group. Data from one hearing participant were removed due to artefacts probably caused by metallic hair dye; thus, data from sixteen participants from each group were included in further analysis. Brain activation pattern analysis was conducted by fitting a general linear model with regressors representing each condition as well as the six motion parameters derived from the realignment procedure and response time as a covariate. At first level analysis, statistical parametric map images pertaining to contrasts between each of the three experimental tasks (multiplication, subtraction and phonology) versus the visual control task were defined individually for each participant. These contrast images were thereafter brought into second level analysis, where one-sample t-tests (i.e. task versus visual control) were performed separately for each group, and thereafter into an independent t-test for group comparisons. Activation is considered significant if p(FWE) < .05 at peak level, but for clarity and visualisation purposes clusters are shown for p(FWE) < .05 at cluster level in both Table 3 and Figure 2.
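The non-compliance criterion amounts to a simple filter over blocks (an illustrative sketch with made-up block data, not the authors' code; a block of five trials is kept only if it contains at most two errors):

```python
def keep_block(trial_correct):
    """Keep a block (list of per-trial correctness booleans) unless it has >2 errors."""
    errors = sum(1 for ok in trial_correct if not ok)
    return errors <= 2

# Hypothetical example blocks of five trials each
blocks = [
    [True, True, True, False, True],    # 1 error  -> kept
    [True, False, False, False, True],  # 3 errors -> discarded
]
kept = [b for b in blocks if keep_block(b)]
print(len(kept))  # 1
```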
To further investigate our brain region-specific predictions, region of interest (ROI) analyses were performed. The four regions of interest, comprising the anterior part of the left inferior frontal gyrus (lIFG-BA45), the posterior part of the left inferior frontal gyrus (lIFG-BA44), the left angular gyrus (lAG) and the right horizontal portion of the intraparietal sulcus (rHIPS), were defined using the cytoarchitectonic probability maps from the Anatomy Toolbox in SPM12 (Eickhoff et al., 2005). For each participant, mean voxel values from each ROI were obtained separately for each of the three contrasts (experimental task > visual control task), again using the Anatomy Toolbox in SPM12. To investigate whether there was significant activation for either of the two groups within any of the individual ROIs, the mean voxel values were compared in separate one-sample t-tests, one for each group. Finally, a set of independent t-tests was calculated to investigate group differences within each ROI.
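These ROI statistics reduce to standard t-tests on per-participant mean voxel values; with scipy, the analysis for one ROI and one contrast looks roughly as follows (synthetic data; `deaf` and `hearing` stand in for the sixteen per-group contrast values, and note that the reported group comparisons additionally covaried response time, which a plain independent t-test omits):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic per-participant mean voxel values for one ROI and one contrast
deaf = rng.normal(loc=0.5, scale=0.4, size=16)
hearing = rng.normal(loc=0.1, scale=0.4, size=16)

# Within-group: is the ROI significantly (de)activated vs. the control task?
t_d, p_d = stats.ttest_1samp(deaf, popmean=0.0)
t_h, p_h = stats.ttest_1samp(hearing, popmean=0.0)

# Between groups: do deaf signers and hearing non-signers differ?
t_g, p_g = stats.ttest_ind(deaf, hearing)
print(f"deaf vs 0: p={p_d:.3f}; hearing vs 0: p={p_h:.3f}; group: p={p_g:.3f}")
```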
Analysis of in-scanner response time and accuracy data as well as ROI mean voxel values was performed using SPSS Statistics (IBM SPSS Statistics, version 22; IBM Corporation, New York, USA). Response time and accuracy data were analysed using independent t-tests for each task. All t-tests were two-tailed with a significance level of p < .05.

Behavioural results
There were no differences in response time between groups on any of the tasks (Table 2). Accuracy was significantly lower in deaf signers compared to hearing non-signers for phonology, whereas there were no differences in accuracy between groups for multiplication and subtraction.

Whole brain analyses
Group-specific activations for each of the three experimental tasks (multiplication, subtraction and phonology) compared to the visual control task are presented in Table 3 and in Figure 2. In general, the hearing group showed an apparently more widespread activation pattern than the deaf group.
For multiplication and subtraction there was no significant peak activation for the deaf group. For the hearing group, significant peak activations were found in areas not covered by the probabilistic map, but close to the left middle occipital gyrus and left hippocampus, for multiplication compared to visual control. For subtraction compared to visual control, significant peak activation was found in the left inferior and superior parietal lobules for the hearing group. At cluster level there were significant clusters for the deaf group in the right intraparietal sulcus and left inferior frontal gyrus (pars opercularis) for multiplication, as well as in the occipital gyrus for both multiplication and subtraction. For the hearing group, significant clusters were found in the bilateral cerebellum and left parietal and frontal areas for multiplication, and in bilateral parietal areas and the left middle frontal gyrus for subtraction.
The phonology compared to visual control contrast revealed significant peak and cluster activation in the left occipital lobe for the deaf group. In the hearing group significant activation at peak and cluster level was found in a left lateralised fronto-parietal network, as well as in the cerebellum bilaterally.
Comparison of activation between groups for each of the three contrasts revealed no statistically significant differences. However, apparent differences in the pattern of activation are visible in the more liberally thresholded Figure 2, and these were then further investigated within the a priori regions of interest.

Regions of interest
To investigate the specific predictions, mean activation was investigated within the four regions of interest (ROIs). Mean ROI activation and group comparisons are reported in Table 4 and visualised in Figure 3. The right horizontal intraparietal sulcus (rHIPS) was significantly activated for subtraction compared to visual control in both groups, and there was no statistically significant difference in activation between groups for this contrast. For multiplication, the rHIPS was significantly activated for the deaf signers but not for the hearing non-signers, and there was a statistically significant difference in activation between groups, with stronger activation for the deaf signers. The left angular gyrus (lAG) was significantly less activated for all the experimental tasks compared to the visual control task in both groups, and there were no differences in activation between groups in this ROI.
In order to establish that this pattern of activation in the lAG represented a deactivation relating to the experimental tasks rather than an activation during the visual control task, we also compared activity during the three experimental tasks and visual control task to activity during rest. We found that all three tasks as well as the visual control resulted in deactivation of the lAG compared to activity during rest. In the pars triangularis of the left inferior frontal gyrus (BA45), both groups showed significant activation for multiplication and phonology compared to the visual control, but neither group showed significant activation for subtraction compared to visual control in this ROI. There were no significant group differences for any of the contrasts in BA45. Both groups significantly activated the pars opercularis of the left inferior frontal gyrus (BA44) for multiplication and phonology compared to visual control. For subtraction only hearing non-signers showed significant activation in BA44, but there were no significant differences in activation between groups for any of the contrasts in this ROI.

Notes to Table 3: Peak- and cluster-level FWE-corrected values at p < .05 are included in the table. Brain regions are based on the cytoarchitectonic probability maps of the Anatomy Toolbox in SPM12. (a) Peaks were located in areas not mapped in the atlas; the closest mapped area is given in parentheses.

Notes to Table 4: One-sample t-tests indicate whether the respective area is significantly activated (compared to the visual control task) for each group. One-way ANOVA with response time as a covariate is used for group comparisons.

Figure 3. Activation within ROIs. Outline of the four regions of interest together with ROI mean voxel values for each of the three tasks (multiplication, subtraction and phonology) minus visual control, presented by group. Error bars represent SEM. ROIs are defined using the cytoarchitectonic probability maps from the Anatomy Toolbox in SPM12 (Eickhoff et al., 2005).

Discussion
This study is the first to investigate the neuronal substrates of arithmetic in deaf signers. We show that there are similarities between the neuronal substrates of arithmetic for deaf signers and hearing non-signers. However, although whole brain between-group contrasts did not reveal any significant differences, the ROI analyses did. In particular, deaf signers compared to hearing non-signers showed stronger activation in right intraparietal sulcus (rHIPS) for multiplication. Because of our well-controlled design, using the same stimuli for all tasks including visual control, there was generally little activation that survived the family-wise error correction for the deaf signing group. The spread of statistically significant activation was larger for the hearing group. A likely explanation of the less extensive activation in the deaf group is a larger degree of variability compared to the hearing group, leading to less robust activation patterns at the group level. This is a typical finding for fMRI studies of deaf signers (e.g. Corina, Lawyer, Hauser, & Hirshorn, 2013). Therefore, for clarity, results were presented at cluster level, together with peak level activation.

Arithmetic processing
The primary purpose of the present study was to investigate neuronal networks supporting arithmetic processing in deaf signers and contrast them with the corresponding networks in hearing non-signers. At whole-brain level there was little significant activation for the deaf signers and there were no significant differences between groups. However, region of interest analyses (Figure 3) showed that for the deaf signers both multiplication and subtraction generated significant activation in the rHIPS, whereas for hearing non-signers, only subtraction elicited significant activation in this region. The significant activation seen for hearing non-signers in rHIPS for subtraction but not for multiplication is in line with the triple code model (Dehaene et al., 2003). This model, which is based on data from hearing individuals, proposes that subtraction requires magnitude manipulation along the mental number line, a function supported by rHIPS, whereas multiplication requires arithmetic fact retrieval processes that are supported by language related brain regions in the left cerebral hemisphere (Dehaene et al., 2003). The significantly stronger activation elicited by multiplication for deaf signers compared to hearing non-signers indicates that deaf signers rely on magnitude manipulation for solving multiplication tasks to a larger extent than do hearing non-signers. Importantly, there were no significant differences in either response time or accuracy between the two groups on the arithmetic tasks. Thus, while the neuronal activation pattern suggests differential engagement of qualitatively different mechanisms between groups, the behavioural pattern suggests that this does not take place at the expense of either speed or accuracy. Further, we have recently shown, in a sister study using the same stimuli as here, that deaf signers activated rHIPS during a number ordering task while hearing non-signers did not (Andin et al., 2018).
Taken together, these findings point towards a partly different role of rHIPS in digit processing for deaf signers compared to hearing non-signers. This challenges the universality of the triple code model (Dehaene et al., 2003).
The triple code model highlights the importance of the left angular gyrus (lAG) and its linguistic function of arithmetic fact retrieval for solving multiplication (Dehaene et al., 2003). This notion is supported by several empirical studies (for a review see Seghier, 2013). Contrary to the reviewed literature and our prediction, we found significant deactivation of the lAG for both the subtraction and multiplication tasks compared to both visual control and rest. This applied to both deaf signers and hearing non-signers. Recent studies have shown that deactivation of the lAG may reflect the difficulty of the task at hand, rather than the operations upon which it depends, as processing resources are deployed to other regions of the brain (Seghier, Fagan, & Price, 2010;Wu et al., 2009). Further, there was no significant difference in lAG deactivation between groups, providing no support for our hypothesis of stronger activation related to arithmetic fact retrieval for hearing non-signers.
The other part of the verbal system suggested to be involved in arithmetic processing, investigated by the ROI analysis in this study, was the left inferior frontal gyrus (lIFG). This region is associated with verbal processing of arithmetic (Dehaene et al., 2003;Lee & Kang, 2002). We show that both groups significantly activated this region for multiplication. However, for subtraction, only hearing non-signers showed activation, and this only reached significance in the pars opercularis of lIFG (BA44). There were no significant differences in activation between groups for either of the arithmetic tasks in either part of lIFG. This pattern of results indicates that verbal processes are involved in multiplication for deaf signers as well as for hearing non-signers. As regards subtraction, however, we found no direct evidence of lIFG involvement for deaf signers.
Summing up, the results pertaining to arithmetic processing suggest that deaf signers make use of both verbal processing and the quantity system for multiplication, whereas they mainly make use of the quantity system during subtraction. Hearing non-signers, on the other hand, make use of both verbal processing and the quantity system for subtraction, whereas they solve multiplication mainly by verbal processes. Thus, while the results obtained for hearing non-signers are in general agreement with the triple code model (Dehaene et al., 2003), those obtained for deaf signers are not.
It should be noted that the careful matching of participants in the two groups could be the reason for the lack of a significant behavioural difference between groups on the arithmetic tasks. This distinguishes this study from previous work showing that deaf individuals often perform worse than hearing individuals on arithmetic in general (Kritzer, 2009) and specifically on multiplication (Andin et al., 2015;Nunes et al., 2009). The deaf participants in the present study represent a population for whom educational opportunities have been optimised by extensive support for their language development and use (Bagga-Gupta, 2004), which is likely to have supported their arithmetic skills. Thus, the present results demonstrate that deaf individuals who are proficient in signed language can perform mental arithmetic just as successfully as hearing non-signers, yet do so by making use of qualitatively different processes, with higher reliance on brain regions supporting magnitude processes.

Phonological processing
A secondary purpose of the present study was to investigate phonological processing networks in deaf signers and their potential overlap with arithmetic processing networks. Previous work has shown that phonological processing activates the lIFG in deaf signers in much the same way as in hearing non-signers (MacSweeney, Waters, et al., 2008;Rudner et al., 2013). We have previously shown activation in the classical left-lateralised language network for phonological processing using the same stimulus material and the same group of hearing non-signers as in the present study (Andin et al., 2015). The current results, however, showed no significant activation of the lIFG at whole brain level for deaf signers during the phonological task. Indeed, for this group, the phonological task generated significant activation in one region only, the middle occipital gyrus. The ROI analysis, on the other hand, did show significant mean activation in both portions of the lIFG. Furthermore, this activation did not differ significantly between groups. These findings are in line with previous studies showing activation of the lIFG for deaf signers during language processing in general (Horwitz et al., 2003) and phonological processing tasks in particular (MacSweeney, Waters, et al., 2008;Rudner et al., 2013). Because our phonological processing task specifically avoids a semantic route to phonology, the results of the present study support the notion that phonological processing is a metalinguistic function that is at least to some extent independent of the surface characteristics of the task. This interpretation is further supported by the findings in the present study of significant activation of the lIFG for deaf signers during the multiplication task, which we have argued represents engagement of verbal processing.
It is worth noting that the lack of a between-group difference in activation of the lIFG during phonological processing persisted despite the fact that the deaf signers were less accurate (although not slower) at solving the task. Poorer accuracy on the phonological task may be due to deaf individuals having less practice than their hearing peers in explicitly accessing the phonological representations of their native language, possibly because more emphasis is placed on manipulating the phonology of speech-based language than of sign-based language, even in educational settings with a bilingual curriculum. This interpretation could also partially explain the activation of the visual cortex for the deaf signing group during the phonological task; in particular, it may reflect a compensatory visually based strategy in this group, possibly related to character identification (cf. Rudner et al., 2013).

Conclusion
We found that a sample of deaf individuals who have had good educational opportunities and support to develop their signing skills did not perform worse than hearing individuals on simple multiplication and subtraction. Nonetheless, investigation of the neuronal networks that supported their arithmetic and language processing suggested the possibility of different strategies. In particular, ROI analyses revealed that deaf signers had stronger activation in the rHIPS than hearing non-signers during multiplication, suggesting specific engagement of magnitude manipulation strategies via the quantity system. Further, we found evidence that during phonological processing and multiplication deaf signers engage the lIFG in a manner similar to hearing non-signers. Taken together, this pattern of results shows that deaf signers can perform arithmetic tasks just as successfully as hearing non-signers. However, the brain regions recruited are partially different, at least as regards multiplication. Future research should disentangle the effects of deafness and sign language experience on magnitude manipulation as a strategy for solving multiplication tasks.

Acknowledgement
Thanks to Shahram Moradi for technical assistance and to all participants who gave generously of their time.

Disclosure statement
No potential conflict of interest was reported by the authors.

Funding
The work was supported by funding from the Swedish Research Council [grant number 2005-1353], [grant number 349-2007-8654].

Data availability statement
The data that support the findings of this study are available from the corresponding author, JA, upon reasonable request.