To Move or Not to Move: Development of Fine-Tuning of Object Motion in Haptic Exploration

Recognizing objects through touch is a complex task based on integrating information coming from multiple senses and the motor commands guiding the exploratory motions. To gain insight into the development of exploratory strategies in children, in this study, we addressed the question: how does exploration change when the stimulus becomes freely movable rather than fixed? We tested whether the possibility to move the object confers a strategic advantage, reducing the time and the number of touches necessary. We analyzed how school-aged children explore iCube, a sensorized cube measuring its orientation in space and the locations of contacts on its faces. We tasked participants with finding specific cube faces; they could only touch the static cube, move and touch it, or move, touch, and look at it. Visuo-haptic performance was adult-like at seven years of age, whereas haptic exploration was less effective until nine years. The fine-tuning of object movements as a function of task constraints, e.g., the availability of vision or a blind haptic task, increased significantly with age. Shedding light on how different factors shape haptic exploration could help researchers detect the occurrence of abnormal exploratory behaviors early in development, providing a novel approach to detecting perceptual problems.


I. INTRODUCTION
WHEN we explore a new object to understand its shape, we integrate multiple sources of information. Vision can give us, at a glance, an idea of the object's appearance, while our hands actively explore it, providing tactile information (when the skin encounters the object surface) and proprioceptive information related to the hand configuration. Additionally, the hands can move the object to reveal hidden parts and allow more extensive or comfortable tactile exploration. It is astonishing how our brain can intuitively integrate such a wealth of complex data through different senses and effectors, leading to a unitary percept. Exploratory action could help organize such diverse inputs, with the efferent motor commands potentially providing a common reference for the combination of the different sensory information.
In fact, active manipulation is not blind to the goal of exploratory actions. Humans naturally change the way they touch an object as a function of what they want to know about it. I will hold a purse in my hand if I want to understand whether it is empty or full, but I will gently stroke its surface if I want to recognize its material. Lederman and Klatzky [1] identified different exploratory procedures one may adopt to infer a variety of object features: "unsupported holding" for weight or "lateral motion" for texture, as in the purse example, but also "pressure" for hardness detection or "static contact" for temperature estimation, among others. Following their seminal work, it has become clear that "one exploration does not fit all." Rather, manipulations are planned differently as a function of the feature to be evaluated, both for two- and three-dimensional objects [2], [3], and also depending on the physical features of the object [4].
Successful haptic and visuo-haptic explorations require sophisticated coordination between action and perception. This comes naturally to adults, to the point that no conscious reflection is required to select the best exploratory action. However, humans are not born with this ability: it is a skill. Children undergo an important developmental path during which both their sensory and their motor competences develop. Haptic perception and proprioception do not reach adult-like precision until adolescence. In particular, the haptic system is not fully developed until 8-10 years of age [5], and until adolescence the afferent proprioceptive and tactile signals are less precise than in adults [6], [7]. Motor skills also become progressively more refined with age. Infants start to perform reaching actions from 4-5 months of age, yet hand velocity patterns do not become adult-like before two years of age [8]. Movement variability keeps decreasing until about ten years of age, when a reduction in kinematic errors during adaptive motor learning is also observed [9]. In summary, between the ages of 7 and 10 years we observe important, dynamic changes in children's sensory-motor competences. The fact that neither domain has yet reached a stable state at this age might represent an obstacle to effectively integrating perceptual inputs and efferent motor plans.
Analyzing how children explore objects at different ages confirms this potential difficulty. Toddlers, from two to four years of age, do not properly explore objects actively when they must recognize them through touch: rather, they either keep a static hand on the target or lightly scratch it [3]. A substantial improvement is observed by seven to eight years of age, when children manage to coordinate successive haptic perceptions and use a fixed point of reference to organize them [10]. Still, the necessity to combine action and perception poses a problem even at a later age. From the perspective of the difficulty of integrating motor efference and continuous haptic input, we showed that even 8-11 year olds exhibit a lower haptic perceptual acuity when judging surface curvature by actively moving their hand, rather than through passive motion [11]. Interestingly, for finger exploration this advantage for passive curvature discrimination is maintained also in young and older adults [12]. Difficulties also exist in selecting the most appropriate exploratory action. Cirillo et al. [13] asked 8-10 year olds and adults to haptically match angles made of wooden sticks, selecting either the stimulus sharing the same angle as the target or the one of the same size. Adults moved their finger along the entire stimulus in the "match size" condition only, while in the "match angle" condition they explored the angle. Conversely, children executed all types of exploratory movements in all conditions, without differentiating them as a function of the feature under analysis. The result is that even during primary school, children are still learning how best to combine their movements and senses to learn about the world around them.

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
One important component of haptic exploration is the possibility to move the object. When vision is present, this is necessary to observe hidden parts. Object rotation is less fundamental for purely haptic tasks where all the parts of the object are reachable with one's hands. However, when we have an object in hand, we commonly tend to move it, possibly to speed up the exploration of the whole surface and to make it more comfortable for our hands and fingers to sense the contours and texture. In certain tasks, the possibility to move the object facilitates haptic perception. There is evidence that active exploration is more beneficial than passive haptic analysis in certain exercises (e.g., in some texture perception conditions, see [14]), and that constraining the exploration is detrimental to perception accuracy [15].
Moving the object makes exploration even more "active," in the sense that the agent is now responsible not only for the hand movements relative to the object, but also for the motion of the object itself. How do children deal with this increased level of complexity? Our question is whether the possibility to freely rotate and move the object offers a strategic advantage when solving an exploratory task, reducing the time and the number of touches necessary; moreover, we are interested in whether this advantage can already be exploited early in development. To address this question, we asked children (7- and 9-year-olds) and a group of adults to explore a sensorized cube (iCube) to find specific configurations of raised pins we had positioned on the cube's faces. Importantly, in one condition the cube was fixed and participants could only move their hands around it, while in a different set of trials participants held the cube in their hands and could manipulate it freely.
We also analyzed the impact of vision on exploratory behaviors by comparing trials wherein we blindfolded participants with others wherein participants could visually monitor their actions.
Finally, we wished to assess whether the complexity of the task to be solved influences active exploration and object movement. We confronted participants either with a relatively simple task, such as identifying a specific cube face (e.g., find the face with two pins), or with more challenging exercises requiring a relative comparison of the properties of multiple faces (e.g., find the face with the largest number of pins). To maximize the need to collect and memorize multiple, complex pieces of spatial information, we also tasked participants with counting all the pins on the cube. In issuing such a complex task, we aimed to induce participants to adopt the most efficient strategy in exploring the object and to cope with the challenge of keeping track of the properties of all the different faces at once.
We assessed how participants of different ages manipulated the cube in varying tasks and under different motor and visual constraints, specifically by analyzing the patterns of touches on the cube faces and the orientations impressed on the object while a participant was exploring it.

II. METHODS

A. Subjects
Thirty subjects took part in the experiment: ten adults (three males, age: 24 ± 3 (SD) years), ten 9-year-olds (four males), and ten 7-year-olds (seven males). All participants were naïve to the goal of the experiment. The Regional Ethical Committee (Comitato Etico Regione Liguria-Sezione 1) approved the research protocol, and all participants, or their legal guardians, provided written informed consent. One 9-year-old child and one 7-year-old child did not complete the test, and we therefore excluded them from all analyses.

B. Device
We conducted this study using a sensorized cube designed at IIT, called iCube [16], which measures its orientation in space and the location of the contacts on its faces, and communicates this information wirelessly to a computer. The cube is about 5 cm on each side, with 16 cells per face, and sends data to the PC every 348 ms (±52 ms, SD) on average. Touch sensing is based on a set of capacitive button controllers (CY8CMBR2016) developed by Cypress Semiconductor Corporation. These enable the cube to detect simultaneous touches and support up to 16 capacitive cells (6 × 6 × 0.6 mm), which can be organized in any geometrical layout. Each face of iCube is made with one of these boards. Orientation estimation is based on a motion processing unit (MPU), a 9-axis integrated device combining a 3-axis MEMS gyroscope, a 3-axis MEMS accelerometer, a 3-axis MEMS magnetometer, and a digital motion processor (DMP). The MPU combines information about acceleration, rotation, and the gravitational field into a single stream of data. The information from each iCube is sent to the computer through a serial protocol. An XBee radio module, together with an integrated circuit developed by Future Technology Devices International Ltd. (FTDI), handles the reception.
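The device's serial format is not documented here; as a purely hypothetical illustration of how one face's 16-cell touch state might be decoded on the PC side, the sketch below assumes each face arrives as a 16-bit mask with one bit per capacitive cell (an assumption, not the actual iCube protocol).

```python
# Hypothetical sketch: decoding one face's touch state into a 4x4 map.
# The real iCube wire format is not specified in the text; we assume a
# 16-bit mask per face, bit i mapping to cell (i // 4, i % 4).

def decode_face_mask(mask: int) -> list[list[int]]:
    """Turn a 16-bit touch mask into a 4x4 matrix of 0/1 values."""
    return [[(mask >> (row * 4 + col)) & 1 for col in range(4)]
            for row in range(4)]

# Example: bits 0 and 5 set, i.e., cells (0, 0) and (1, 1) touched.
face = decode_face_mask(0b100001)
```

Whatever the actual encoding, the result is the per-face binary tactile map that the analysis section below operates on.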

C. Protocol
Before the experiment started, we prepared the iCube device and connected it wirelessly to a laptop with MATLAB installed; we then positioned a set of adhesive plastic gray raised pins (diameter: 0.5 cm, height: 0.2 cm) on its faces, similar to a die. An example of pin positioning is visible in Fig. 1. The number of pins ranged from 0 to 6 on each face. The starting position of the cube was always the same, with its face 1 (ID not visible to participants) upward and its face 4 facing the subject. The participant was comfortably seated in front of the table, where we positioned the cube on a support. When the experiment started, the subjects received the instructions, i.e., the question to be answered, and, as a function of the condition, they either received the cube in their hands from the experimenter or were guided to touch it on its support.
The experiment comprised three exercises of increasing spatial complexity: in the "Absolute" condition, participants had to find a specific face of the cube (e.g., "Find the face with five pins"). In the "Relative" condition, participants had to find the face with the maximum or minimum number of elements and in the "Count" condition, participants had to count the total number of pins on the faces of the cube. The main difference between the first two conditions is the necessity (or not) of evaluating the faces in relation to each other. In the Absolute condition, this is not needed at all, whereas the Relative condition requires a set of comparisons. In the Count task, the participant must keep track of all the faces already visited, in order not to count the same pins twice. Participants reported their decision verbally in the Count task and also by pointing at the identified face in the Absolute and Relative tasks. All tasks were tested in three different sensorimotor conditions: "Tactile," "Haptic" and "Visuo-Haptic." In the first two modalities, the experimenter blindfolded subjects during the exploration. In the Tactile condition participants could touch the cube, but it could not be rotated or moved in any way. In the Haptic condition, the cube was held in the hands of the subject and could be actively manipulated and rotated. The Visuo-Haptic condition was similar to the Haptic but subjects removed the blindfold at the beginning of the trial. In the Tactile condition, the experimenter informed participants that the supporting face had no pins on it and we left one face empty in all other conditions so as to make the task difficulty comparable. We presented adult participants with three trials per condition × modality pairing, whereas for children we shortened the experiment to two trials per each condition × modality pairing. The Count task was performed only once per modality. The experiment lasted about 30 min on average.
The task order was Absolute, Relative, and Count. We selected this order since the tasks were designed to be of increasing difficulty and we did not want to discourage participants, in particular those in the youngest group, by confronting them first with an exercise that might have been too difficult for them. The modalities were tested in the order Haptic, Tactile, and Visuo-Haptic. This choice was made to prevent experiencing the Tactile modality first from inducing participants to feel constrained to keep the cube static in the Haptic condition as well. We left the Visuo-Haptic condition to the end to minimize the impact of previous visual experience on the two nonvisual exercises. Although this design does not allow accounting for practice effects, we deemed it necessary to avoid potentially disruptive influences among conditions.
Before the experiment began, we offered subjects the opportunity to touch and explore the cube to familiarize themselves with the task, with no time limit for exploration. This way we ensured that the device and the different exercises were well known and understood before the actual experiment started.
One seven-year-old child and one adult did not perform the "Visuo-Haptic Count" condition. We therefore excluded them from related analyses.

D. Data Analysis
iCube raw data are extracted and processed through a MATLAB routine, which periodically interrogates the device.
1) Touches: For each of the six boards, representing the faces of the cube, the device reports a tactile map, i.e., a 4 × 4 matrix of zeros and ones, where a one represents a touch. In the analysis, we considered the total number of touches on all six faces as a measure of tactile exploration. We also computed the exploration duration as the time between the first and the last touch of the subject (manually cutting the initial phase of recording of each file, i.e., when the experimenter put the cube in the hands of the participant). We then divided the number of touches by the exploration duration to compute the touch frequency, i.e., the number of touches per unit of time.
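The touch measures above can be sketched in a few lines. This is an illustrative Python rendition, not the authors' MATLAB routine; the sample layout (a timestamp in seconds paired with six 4 × 4 binary face maps) and the function name are our own.

```python
# Illustrative sketch: touch count and touch frequency from a sequence
# of tactile samples. Each sample is (timestamp_s, six 4x4 0/1 maps).

def touch_frequency(samples):
    """Return (total touches, touches per second between first and last touch)."""
    # Timestamps of samples in which at least one cell is active.
    touched = [t for t, faces in samples
               if any(cell for face in faces for row in face for cell in row)]
    if len(touched) < 2:
        return 0, 0.0
    total = sum(cell for _, faces in samples
                for face in faces for row in face for cell in row)
    duration = touched[-1] - touched[0]  # first to last touch
    return total, total / duration
```

Normalizing by the first-to-last-touch interval, rather than the full recording, matches the manual trimming of the hand-over phase described above.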
2) Rotation: The information about the orientation of the cube with respect to its starting position is provided in the form of a quaternion, which is then converted in MATLAB into a rotation matrix to compute the instantaneous rotation. In particular, one can compute the instantaneous angular variation by measuring the angle traversed over time by each of the three unit axes orthogonal to the faces of the cube. Given one axis,

angle_axis(t) = arctan( ‖axis(t) × axis(t − 1)‖ / (axis(t) · axis(t − 1)) ) · 180°/π.
One can integrate the rotations performed by the three axes over time to estimate the overall rotation impressed upon the cube in all possible directions. To quantify the amount of rotation, we considered the maximum value among the three axes. We then computed the instantaneous rotation speed by dividing angle_axis(t) by the corresponding time interval and averaging the results across the three axes and across all the instants of the trial in which the cube was in motion (i.e., angular speed > 1°/s). This selection was done to assess the actual rotation velocity when participants executed rotations, without spuriously reducing the estimate by including the static phases.
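The two rotation measures can be sketched as follows. This is an illustrative Python version of the computation described above, not the study's MATLAB code; an axis trace is assumed to be a list of 3D unit vectors sampled over a trial, and atan2 of the cross/dot pair is the numerically robust form of the arctan ratio in the formula.

```python
# Illustrative sketch of the rotation measures: per-step angle swept by
# one cube axis, and its integral over a trial.
import math

def step_angle_deg(v0, v1):
    """Angle in degrees between successive axis vectors: atan2(|v0 x v1|, v0 . v1)."""
    cross = (v0[1] * v1[2] - v0[2] * v1[1],
             v0[2] * v1[0] - v0[0] * v1[2],
             v0[0] * v1[1] - v0[1] * v1[0])
    dot = sum(a * b for a, b in zip(v0, v1))
    return math.degrees(math.atan2(math.sqrt(sum(c * c for c in cross)), dot))

def total_rotation_deg(axis_trace):
    """Integrate the step angles along one axis trace over a trial."""
    return sum(step_angle_deg(a, b)
               for a, b in zip(axis_trace, axis_trace[1:]))
```

The amount of rotation for a trial would then be the maximum of total_rotation_deg over the three axes, and steps slower than 1°/s would be excluded before averaging the instantaneous speed.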

III. RESULTS
To assess participants' performance, we computed the percentage of correct responses, i.e., the percentage of times participants selected the right face in the Absolute and Relative tasks and the percentage of times they pronounced the right number in the Count task. Fig. 2 summarizes the results. The Count task was very difficult, especially for the younger groups (average percentage of correct responses: 11% for 7YO; 30% for 9YO; 67% for adults). All age groups performed the two simpler tasks well, particularly when exploration was supported by vision.
To better understand the impact of moving the object on performance, in Fig. 3 we plot the percentage of correct answers, averaged between the Absolute and the Relative tasks, as a function of exploration Modality (Haptic, Tactile, or Visuo-Haptic), color coded by age group. We do not consider the Count task in this graph to avoid potential differences in performance due to the higher cognitive difficulty that counting poses for young children. Indeed, even in the presence of vision, children perform clearly worse than adults in the Count task. Conversely, in both the Absolute and Relative tasks, success rates reach adult-like levels in the Visuo-Haptic modality (97%). Additionally, the accuracy in the Count task is not directly comparable to that of the other two tasks because the chance levels are different. 9YO children show an average performance similar to adults (M: 88.6% ± 6% versus 95.5% ± 2%), with a slight decrease in the absence of vision. 7YO children perform better in the presence of vision.
To analyze these results further, we computed the proportion of correct trials in the Absolute and Relative tasks for each participant and used these data as the dependent variable in a linear mixed-effects model with AGE and MODALITY as factors.

To further evaluate whether the spatial complexity of the task at hand (TASK) and the sensory and motor exploratory modalities available (MODALITY) modified the way participants explored the object, we compared several variables related to exploration across conditions, including exploration duration, cube rotations, and the number of touches normalized by exploration time. Although all participants encountered great difficulties completing the Count task, we decided to include it in the analysis of exploratory strategy: the task was designed to challenge participants and induce them to explore in a structured and more prolonged fashion, and we were therefore more interested in the procedure adopted than in the results themselves. We refer the reader to [17] for a series of analyses that disregard the Count task, thereby avoiding the potential effect of the imbalance in the cognitive challenge that the tasks pose.

A. Exploration Duration

Exploration time is similar among the three age groups and depends strongly on exercise complexity and the available sensory-motor modalities. A linear mixed-effects model with duration as the dependent variable, AGE, TASK, and MODALITY as factors, and a random effect at the subject level on the intercept confirms these observations. In the omnibus model, neither AGE nor any interaction reaches significance (all p > 0.

B. Touch Frequency
We computed the touch frequency as the average number of touches normalized by the exploration duration (Fig. 5, top row). From the graph, it is evident that adults touched the cube less frequently than the other two groups. A linear mixed-effects model with touch frequency as the dependent variable, AGE, TASK, and MODALITY as factors, and a random effect at the subject level on the intercept confirms these observations. In the omnibus model, neither TASK nor any interaction reaches significance (all p > 0.5).
Rerunning the model with only AGE and MODALITY as factors shows that adults exhibit a significantly lower touch frequency than both groups of children. Hence, rotating the cube is associated with an increase in tactile exploration.
To assess whether the increased touch frequency in the Haptic condition was present merely because of the need to hold the cube in hand, we eliminated from the analysis all touches that remained steady on the same cell for a long interval of time (about 2 s) [16]. The rationale behind this approach is that exploratory touches imply a dynamic motion on the surface of the cube, so a cell touched continuously for such a long time most probably corresponds to a holding touch. The bottom row of Fig. 5 reports the frequency of such shorter touches. We ran a linear mixed-effects model with short-touch frequency as the dependent variable, AGE, TASK, and MODALITY as factors, and a random effect at the subject level. In the omnibus model, neither AGE nor any interaction reaches significance (all p > 0.1). Rerunning the model with only TASK and MODALITY as factors shows that the short-touch frequency is significantly smaller in the Count task than in the Absolute one. The fact that the frequency of short touches is more similar across ages than the overall touch frequency suggests that the difference between adults and children could be due to a larger number of long (potentially supporting) touches by the latter. This might also be because children have smaller hands than adults, so that more contact between the hand and the cube might occur just by holding it, while adults could, in principle, touch the cube only with the parts of the hand with the highest resolution (the fingertips).
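The holding-touch filter can be sketched as follows. This is an illustrative Python version under the assumption that the raw contacts have already been segmented into per-cell intervals; the names and the interval representation are ours, while the ~2 s threshold follows the text.

```python
# Sketch of the holding-touch filter: drop cells that stay active
# continuously for roughly 2 s or more, keeping only short, presumably
# exploratory, contacts. events: list of (cell_id, start_s, end_s).

HOLD_THRESHOLD_S = 2.0  # threshold from the text (approximate)

def short_touches(events, threshold=HOLD_THRESHOLD_S):
    """Keep only contacts shorter than the holding threshold."""
    return [(cell, start, end) for cell, start, end in events
            if (end - start) < threshold]
```

The short-touch frequency would then be the length of the filtered list divided by the exploration duration.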

C. Rotation
If we focus on the Haptic and Visuo-Haptic modalities, where participants could freely move the cube, we can assess whether and how they exploited this possibility as a function of the task to be solved. Fig. 6 reports the average amount of rotation impressed on the cube in the different tasks and conditions by the different age groups.
We ran a linear mixed model with amount of rotation as dependent variable and TASK, MODALITY, and AGE as factors and a random effect at the subject level on the intercept. For this analysis, we estimated robust standard errors [18] to compensate for data heteroscedasticity.
Considering adults, in the Absolute task they rotated the cube significantly less in the Visuo-Haptic modality than in the Haptic one.

Concerning the average velocity of rotation (Fig. 7), we ran a linear mixed model with TASK, MODALITY, and AGE as factors.

Fig. 7. Average mean velocity of rotation for the three age groups. Same graphical conventions as Fig. 4.

In summary, adults adapt their rotation speed very clearly to the sensory condition in all tasks, speeding up the object movement significantly in the Visuo-Haptic condition. This fine-tuning is present, but less evident, in children. In fact, while adults increase their speed on average by about 300% when leveraging vision, 9YO children increase their velocity by about 100% and 7YO children by even less, about 60% (see Fig. 8).
A one-way ANOVA on the percentage of change shows a significant effect of AGE [F(2, 25) = 18.76, p < 0.001, and partial η 2 = 0.60]. The percentage exhibited by adults is significantly larger than that of both younger age groups (Bonferroni post-hoc, p's < 0.001), which do not differ between each other.
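For reference, the F statistic of a one-way ANOVA like the one reported above can be computed from scratch as below. This is a generic illustration; the numbers in the test are made-up toy values, not the study's percentage-change measurements.

```python
# From-scratch one-way ANOVA F statistic for k independent groups.

def one_way_anova_F(groups):
    """F statistic for a list of k groups (each a list of numbers)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-groups sum of squares (weighted by group size).
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-groups sum of squares.
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

With three age groups and 28 participants, the degrees of freedom (k − 1 = 2, n − k = 25) match the F(2, 25) reported above.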

IV. DISCUSSION
The results of the current study show that adults (and partially also children) fine-tune their exploratory behaviors to the task. Exploration changes when the property to be assessed varies, as established in the traditional investigation of exploratory procedures [1]. At the same time, it is modified as a function of exercise complexity and available sensory modalities. In particular, the addition of vision led to faster exploration, involving larger and much faster rotations.
The role of vision might have been particularly important for the type of exercises we selected in the current study, since vision is more precise than haptics when it comes to detecting stimulus properties such as the spatial configuration of raised pins we proposed [19]. Different tasks focusing on alternative features of haptic stimuli, e.g., compressibility, have shown haptic dominance, also in the age range tested here [20]. Additionally, the observation that participants exerted a number of long and short touches also in the Visuo-Haptic condition might suggest that they performed some active haptic exploration in support of visual inspection. The analysis of the pin configuration, or at least of their number, could entail an estimate of dot density, which can be efficiently performed also haptically [21]. However, it is currently not possible to distinguish whether the touches served merely to support the cube or contributed to the exploration. Future research is necessary to address this question.
The possibility to freely move the object during manipulation did not increase task efficiency: duration and touch frequency increased, while performance remained around 100% in all conditions. This adds to previous evidence that active exploration is sometimes not advantageous in terms of performance [22], [23]. In the current study, the increased degrees of freedom granted by the free mobility of the cube represented more of a difficulty than an advantage in the context of the exploratory tasks considered.
In line with this consideration, the analysis of object rotation showed that adults tended to keep the cube relatively steady in the absence of vision (Figs. 6 and 7, triangles), especially in the Count task. In the Visuo-Haptic condition, they instead rapidly rotated the cube into a variety of configurations, potentially to visually inspect and recheck all the faces. This confirms results that we observed in a different task of similar complexity [16] (see in particular Fig. 8 ibidem, left panel). We speculate that participants adopt a "reduced rotation" strategy in the absence of vision to maintain a relatively stable frame of reference. Conversely, in the Visuo-Haptic condition, they maximally exploit rotations to rapidly gather more information.
Considering the development of exploratory behaviors between 7YO and 9YO children, the main differences with adults are related to haptic perception, whereas performances are very similar when vision is available. In particular, moving the object during exploration causes a marked discrepancy between adults and the youngest age group. For the latter, object movability increases exploration time and touch frequency, and it also drastically interferes with performance. During active haptic exploration, it is necessary to keep track of the motion of the two hands and fingers relative to the cube to integrate the tactile and proprioceptive inputs in space and time and "reconstruct" the explored face without confusing it with the other faces. If one can also move the cube, the fixed, absolute reference point represented by the object disappears, as its orientation dynamically changes over time. The explorer therefore needs to maintain a representation of the cube and its motion in space, together with that of the (relative) movements of their effectors. Since it is only by 7-8 years of age that children can consistently use a fixed point of reference to organize perception [10], participants in the youngest group (seven-year-old children) might not yet have mastered the skill of managing a "rotating" reference point.
The possibility to manipulate the cube also highlighted a difference in the propensity of participants of different ages to finely adapt their exploratory behavior to the current exercise properties. In all tasks, adults substantially (300%) sped up the rotations of the cube when they could use vision to guide their movements. Among children, the tuning of the behavior was more limited, with 9YO children exhibiting a moderate velocity increase between the two conditions that diminished even further for 7YO children. The evident changes between seven and nine years of age in this respect suggest that such adaptation is already developing in that age range.
In general, haptic perception, even when performed on a fixed object, seems to pose more relevant difficulties than visuo-haptic exercises. In fact, even the youngest group exhibits adult-like performances in most tasks where vision is allowed, but it tends to be less successful in haptic modality (i.e., in the Tactile condition, which entailed actively exploring the fixed object). Previous evidence pointed at the late development of sensory-motor integration necessary to support active haptic exploration [11]. A potential reason for such delay is the inability to compensate for the additional noise and uncertainties generated by active motion, perhaps due to a nonmature internal model of the limb [22], [24].
The exploration differences observed across ages could also result from the greater experience that adults and older children may have with dice-like objects. However, we believe that this might have played a more substantial role in the Count task, where the notion of summing up numbers on different faces, potentially acquired through dice usage, was extremely relevant. Different considerations hold for the other tasks. Since infancy, children are often familiarized with cube toys with various drawings on the different faces and with wooden or plastic cubes and parallelepipeds used as building blocks. These often show different features on their faces: for instance, the possibility to connect a block with another one is often limited to a specific subset of the faces and can be sensed both visually and haptically. In this respect, children are confronted with "find the specific face" tasks (our Absolute task) and "find all the faces with a certain property" tasks (similar to our Relative task) very early during development. Therefore, we suggest that the performances in the counting task reflect a large impact of differing expertise, whereas the other two tasks better capture the development of exploratory skills.
The lower performance observed in the youngest age group might also reflect a generalized inferior capacity in working memory and ability to develop strategies for problem solving. We cannot exclude this possibility, but we posit that these skills are particularly necessary when the object is freely movable. Indeed, that condition entails planning and memorizing the object motion relative to the hands, in addition to the exploratory hand motion.
Shedding light on how different factors shape haptic exploration, and how this connects with the efficient perception of object properties during development, could help in detecting abnormal exploratory behaviors at early ages. Indeed, this could lead to a novel approach to the detection and prevention of perceptual problems. Identifying a criterion of optimality for exploratory behaviors would enable the development of novel training protocols for children who encounter difficulties in acquiring exploratory skills. Learning and practicing novel exploratory strategies leads to perceptual improvements in children [25].
We also posit that simple noninvasive tools such as the iCube could be used for an automatic assessment of exploratory strategies. Current approaches to the study of this topic often rely on manual annotation of videos by expert coders, a lengthy and complex procedure sometimes prone to error. Combining this approach with quantitative information about the exploration provided directly by the device might help in designing automatic annotation processes, thereby increasing the reproducibility of the measures. This currently represents a crucial challenge in the context of haptic analysis [26], [27].