A Prototype System for Real-Time Monitoring of Arctic Char in Indoor Aquaculture Operations: Possibilities & Challenges

In this exploratory study, we developed and qualitatively evaluated a prototype video data collection system to capture and analyze fish behavior in a small-scale indoor aquaculture operation. The research objective was to design and develop a hardware/software system with the potential to capture meaningful data from which to extract fish size, swim trajectory, and swim velocity, ultimately as information toward an assessment of fish health. The initial work presented in this paper discusses the development choices of the prototype system, including various combinations of lighting and camera positions both inside and outside of the aquaculture tanks, and several post-processing techniques to isolate fish in video, calibrate the distance from camera to fish through water, and infer fish trajectories and swim velocities. Preliminary results provided a qualitative assessment of such a system. Specific results on the system's ability to detect fishes' positions, trajectories, and velocities are presently limited to observational outcomes and descriptive statistics rather than large-scale quantitative analysis. The present work lays a foundation for a future commercially hardened system that would be required for the collection of larger datasets, which would in turn facilitate the future development of machine learning (ML) algorithms to begin to statistically correlate data to fish conditions and behaviors in near-real time.


I. INTRODUCTION
Aquaculture has grown into an $800M annual industry in Canada, with thousands of employment opportunities directly related to rearing and holding fish, and indirectly related to associated supply and services industries [1]. The majority of Canadian fish farms are found in coastal regions, where the installation of sea cages can reduce operational costs [2]. Historically, the environmental footprint is known to have been high; issues have ranged from the impacts of nutrient concentration and antibiotic use on the ecosystem around the cages to escapees displacing natural stocks [1], [3]. Only a few aquaculture facilities are found inland, and a recent Canadian study highlighted ongoing challenges with fully recirculating aquaculture systems [2], [4].
The associate editor coordinating the review of this manuscript and approving it for publication was Andrea F. Abate.

Myera Group in Manitoba, Canada, is unique in developing aquaculture systems for a full-scale Arctic Char aquaculture operation that is fully recirculating, has a minimal environmental footprint, and can be operated in remote regions. Arctic Char is a key traditional subsistence species and an important source of income to the Inuit of Canada [5]. Evidence suggests that Arctic Char raised in closed-tank recirculating systems have minimal impact on local environments [6], and the species has received significant attention as a candidate for Canadian commercial aquaculture [7].
Myera's modular tank design supports targeted feeding, medication, and environmental sensing and control. In any aquaculture operation, objectives include reducing mortality (particularly of high-value broodstock) and optimizing fish quality and traits throughout the grow-out process. Risks to fish include improper gas, waste, nutrient, or pH balances in the water, temperatures that are too warm or too cold, and disease. Combined with the high stocking densities typical of indoor aquaculture, problems can quickly escalate to pose a threat to significant numbers of fish. Consequently, this work developed a system to monitor fish behaviors as an indication of fish health. It is a precursor to an ultimate objective of remote monitoring and management through real-time video capture and processing. The current research objective was to design and develop a prototype hardware/software system with the potential to capture meaningful data from which to extract fish size and swim velocity. The work presented in this paper discusses the development choices of the prototype system and the qualitative assessment that such a system is possible in a small-scale indoor Arctic Char aquaculture facility. The research challenges included determining the most appropriate selection and combination of video capture equipment and lighting requirements. An additional challenge involved the development of image processing algorithms to successfully identify fish that vary from 5 cm to 45 cm in size over their life cycle, where the size variability within a tank may be as high as a factor of two, and where the water is turbid. Fish density is typically 40 kg/m³, which at full grow-out stage translates to 90 fish in a tank 1.2 m deep and 1.8 m in diameter.

VOLUME 8, 2020. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
The work identifies a number of complexities that need to be addressed before a commercially hardened system and real-time analysis are feasible. Preliminary results are provided on the system's ability, and associated limitations, in detecting fishes' positions, trajectories, and velocities. However, these are limited to observational outcomes and descriptive statistics of system performance rather than large-scale quantitative analysis. For the latter, a commercially hardened system that addresses some of the limitations outlined in this work, and the collection of larger datasets, would be required. Ultimately, in a commercially hardened system, fish count, swim velocity, and swim orientation (belly up or belly down, BU/BD) are of interest to aquaculture operators. BU/BD orientation is correlated to fish color, since bellies are generally lighter in color than backs.
The current exploratory study also lays a foundation for future development of machine learning (ML) algorithms to begin to correlate data to fish conditions and behaviors in near-real time. Fish behaviors, like those of other animals, tend to be predictable. Once recorded, the interpretation of certain behaviors is straightforward. For example, BU/BD orientation, as determined by fish color in the video image, is correlated to fish health. Further, fish also change their coloration with aggression and submission behaviors and their place in the population hierarchy [8]. Analytics can provide insights into behaviors that may indicate poor health or environmental conditions, help operators better understand fish responses to various environmental conditions, and mitigate risk through early detection.

II. BACKGROUND
Other studies have investigated underwater fish tracking to provide insights into the sensitivity of fish within an environment, which in turn may help promote the welfare of farmed fish. Basic farm fish welfare is usually assessed from readily observable behaviors and conditions, including feed intake, swimming activity, and ventilation [9]-[12]. In one study, researchers used a camera mounted on a marine craft to track fish based on the connected components of a fish's body in each frame [13]. Their tracking of Largemouth Bass, while mobile, relied on detecting the distinctive dark lines that mark this particular species' body and tail. Their work was limited to side views and did not differentiate fish by size. In a similar study, researchers used a covariance algorithm between frames to identify multiple fish in each frame [14], without the use of any lighting system, and limited tracking to a side view in each frame. The covariance algorithm was inspired by vehicle and pedestrian tracking algorithms [15], [16].
In other studies, ultrasonic systems have been developed, but these obtained only limited information on the fish [9], [17], [18]. Improvements in measurement frequency and new methods such as broadband acoustic systems have been used to discriminate between different fish species and to provide more data on individual fish [19]. However, these systems are costly and need considerable area for installation. Therefore, a research objective has been to find a solution to obtain meaningful data via low-cost, non-specialized, static underwater cameras.
Underwater imaging can be done using different wavelengths. For example, researchers have attempted to study the presence, size, and behavior of fishes in water under ice using infrared (IR) cameras and white light [20]. Moreover, studies have shown that fish behavior differs when fish are exposed to different wavelengths. In one study, the effects of the wavelength and intensity of an artificial light source were investigated for two different species [21]. It was shown that the fish tended to congregate around violet and red artificial light sources compared to other colors. Also, the intensity of the artificial light was found to affect fish behavior and the distance that the fish tended to keep from the light source. Furthermore, the study revealed that the studied species were sensitive to most white light: illuminating the area with white light affected the natural behavior of the fishes and led to unreliable fish tracking results. These observations were confirmed anecdotally in the present study as well.
Several recent studies confirm the complexity of the problem of tracking fish due to combinations of factors including fishes' body deformation, variable sizes and appearances of fish across the studies, fishes' complex motion, frequent occlusion, and tracking multiple animals simultaneously while preserving individual identities. These factors similarly make it difficult and potentially misleading to compare studies directly [22]- [24].
In this study, we developed a lighting-augmented video capture system to illuminate the fish basin and record video simultaneously. Afterwards, by employing object recognition at each frame, we developed algorithms to track multiple fish in each frame with the objective to estimate their size and their distance from the camera. The system was designed for use in-situ and allowed fish to maintain their normal swim patterns in the tank.

III. MATERIALS AND METHOD
This section consists of two parts: the first outlines the hardware configuration for video capture, and the second describes the algorithms and software for video post-processing and analysis.

A. UNDERWATER LIGHTING AND VIDEO RECORDING SYSTEM
The intent of the study was to develop a low-cost prototype from off-the-shelf components with which to investigate the most promising combinations of lighting and camera positions. The follow-on objective was to use this work to form the basis for a future commercially hardened system. The tanks contained Arctic Char at various grow-out stages, ranging from 5 cm to 45 cm in length. The tanks were 1.8 m diameter and 1.2 m deep with 20 cm freeboard.
A waterproof HD GoPro™ Hero4 Session was used to record video at 30 frames/s with a resolution of 1920 x 1440 pixels. Video segments averaged 10 min in length and were recorded to an onboard SD card. The SD card capacity became an early limitation of the overall prototype. Off-site and at a later time, frames were extracted for post-processing and analysis. Numerous combinations of the following camera positions and lighting were considered:
• Camera at full depth (bottom) at the center of the tank;
• Camera at half-depth on the sidewall of the tank;
• Camera at the surface (5-10 cm submerged) at the center of the tank;
• Camera above the surface (5-10 cm above the surface) at the center of the tank.
An initial effort confirmed that in-tank lighting would be essential to augment ambient room lighting. Waterproof Red-Green-Blue (RGB) LED ribbons with a remote color and illumination controller were used to investigate video quality at various lighting colors and intensities. All camera positions were combined with each of the lighting options, which included a lighting ring at the bottom of the tank and a lighting ring at the surface of the tank. Additionally, the above lighting positions were iterated with various colors of light and with ambient (room) lights on and off. This was an exploratory stage to visually observe the opportunities and limitations of various combinations of lighting (ambient and directed) and colors of lighting. Small amounts of data were collected with most combinations, but these were not systematically analyzed and compared, as the intent was to narrow the focus to one or two combinations for further exploration, which direct observation achieved.
The lighting and camera were mounted on a ring assembled from flexible potable-water-grade tubing (PEX piping) (Fig. 1a). The camera could be easily detached from the lighting. Two ribbons of lighting were installed on the ring, of which one or both could be illuminated to vary the overall intensity of lighting in the tank. Fig. 1b shows a sample position of the lighting and camera at the bottom of a tank. A remote controller allowed the lighting color to be changed during video recording.
Observationally, red backlight was found to generate the best video quality for discriminating between fishes, and between fish and debris in the tank (Fig. 2), with no observable agitation in the fish. Related literature suggests that fish are less sensitive to red light than to other wavelengths, which may cause stress for fish inside the tank depending on their distance to the light source [21]. Other colors were observed to cause agitation in the fish, with white and blue-white light being the most obvious, and these were immediately eliminated from further consideration. Fig. 2a demonstrates that normally the data in each frame are distributed across all RGB channels; it was captured with overhead room lighting only (white) and no additional light source in the tank. Fig. 2b demonstrates that the red color channel still contained all the fishes' frame information when the red backlight in the tank was the only light source, in the absence of ambient room light.

The preliminary experimentation with the hardware setup, across combinations of lighting (color, position) and camera (position), demonstrated several challenges. First, typical aquaculture tank density makes it difficult to obtain video that allows one to distinguish more than two 'layers' of fish away from the camera; this was evident even at low stocking densities. Second, even with lighting added to the tanks, turbidity makes it difficult to visually distinguish between fish and water in the tanks, and certainly between fish back and fish belly, as well as objects more than approximately 30 cm away, even at very low stocking density. Third, positioning the camera underwater limits options for the camera power supply and data transfer.
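As an illustrative check of this red-channel observation, one can compare how much of a frame's total intensity each RGB channel carries. The following sketch is not part of the original MATLAB pipeline; the function name and the synthetic frame are assumptions for illustration:

```python
import numpy as np

def channel_energy(frame):
    """Fraction of total pixel intensity carried by each of the
    R, G, B channels of an (H, W, 3) frame.  Under red backlight
    alone, nearly all of the energy should sit in the red channel."""
    totals = frame.reshape(-1, 3).sum(axis=0).astype(float)
    return totals / totals.sum()

# a synthetic red-backlit frame: signal only in the red channel
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[..., 0] = 200
fractions = channel_energy(frame)  # red fraction close to 1.0
```

In practice such a check could confirm, per Fig. 2b, whether the red channel alone suffices for downstream masking under a given lighting configuration.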
The current study limited itself to storing data on the camera's onboard SD card, which significantly limited the amount of video that could be captured (less than 30 minutes), whereas a camera located in a dry location would provide more options for camera power and for wireless or wired data transfer in real time. Finally, the hardware prototype materials were also chosen with biosecurity considerations to ensure the hardware setup would not introduce harmful materials or interactions into the tank. Specifically, the lighting setup was installed on a frame constructed of potable plumbing components (PEX tubing, brass fittings). The entire setup was rinsed in the disinfecting solution provided by the facility before and after being submerged in the fish tanks.
Future iterations will fully consider a wide-angle camera located at the center bottom or center top of the tank and/or multiple cameras at different locations in the tank. Similarly, future iterations will consider the lighting ring augmented with tank sidewall lighting. If dry (non-submerged) lighting is a priority for power considerations, then sidewall lighting may require ports to be installed in the tanks.

B. CAMERA CALIBRATION
As a basis for estimating the desired fish parameters (size, velocity), the camera was calibrated in air and in water. For calibration, the camera was placed in front of a calibration chart (Fig. 3) at distances of 100 cm, 50 cm, 30 cm, 20 cm, and 10 cm respectively, in both air and water. Then, the Euclidean distance D_E of two specific pixels in the image was calculated based on (1):

D_E = sqrt((x_p1 − x_p2)² + (y_p1 − y_p2)²)    (1)

where (x_p1, y_p1) and (x_p2, y_p2) are the horizontal and vertical positions of the two specific pixels in the image. Subsequently, the actual geometrical distance between the two pixels was measured from the chart, from which one can estimate the length of each pixel. A similar process can be applied based on the actual area of a circle (i.e. a sector) and the number of pixels in that sector, in order to determine the actual size of a pixel at each distance; this generates similar results to the first calibration method. The results of the calibration procedure are discussed in the Results section.
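A minimal sketch of this chart-based calibration, written here in Python rather than the original MATLAB (the function name and chart values are illustrative assumptions): given two chart marks whose true separation is known, the physical length of one pixel at that distance follows directly from (1).

```python
import math

def pixel_length_cm(p1, p2, true_separation_cm):
    """Physical length (cm) of one pixel at a given camera-to-chart
    distance: the known separation of two chart marks divided by
    their Euclidean pixel distance D_E from Eq. (1)."""
    d_e = math.hypot(p1[0] - p2[0], p1[1] - p2[1])
    return true_separation_cm / d_e

# e.g. two chart marks 10 cm apart, imaged 400 pixels apart
scale = pixel_length_cm((100, 200), (500, 200), 10.0)  # cm per pixel
```

Repeating this at each chart distance yields the pixel-size-versus-distance curve reported later as Fig. 7.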

C. VIDEO IMAGE PROCESSING
After capturing video, frames were segmented from the recording. In the first step of processing, the objective is to identify the fish in the image. To that end, a proper mask needs to be calculated, which requires the background to be removed from each frame. This also helped remove objects not under direct study (air bubbles in tanks with pumps running, debris). To do this, the average of the red submatrices of 10 consecutive frames was calculated and initially subtracted from each frame. Subsequently, it was assumed that the movement of the mask centroid roughly corresponds to the movement of the fish. Also, the size of a specific mask can be considered the size of a fish, corrected for the calibrated distance from camera to fish.
To calculate a mask for the fish at each frame, the histogram of the red channel of the frame was adjusted. Afterwards, the binary threshold of the image was calculated based on the global Otsu algorithm [25]. Fig. 4 shows how adjusting the histogram improved the quality of the generated mask, illustrating that the fish are more readily detected in an adjusted image than in the unadjusted one. The result of this thresholding displayed some turbidity and unwanted spots in the water. Therefore, to obtain a clean mask, the image was morphologically opened and then closed with a 2-pixel-diameter disk. Some of the impurities had a larger area and could not be eliminated by the morphological filter. To address this issue, area and shape filters were applied to each frame. In the simplest case, an assumption of a single fish is made and no overlapping of fish masks is expected. More realistic scenarios are discussed in a later section of this paper (e.g. Fig. 9).
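The masking steps above (background subtraction on the red channel, then global Otsu thresholding) can be sketched as follows. This is a simplified Python/NumPy rendition of the approach, not the original MATLAB code; the morphological cleanup step is omitted, and the synthetic frame stack is purely illustrative.

```python
import numpy as np

def otsu_threshold(gray):
    """Global Otsu threshold for a uint8 image: choose the level that
    maximizes between-class variance over the 256-bin histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    w0 = np.cumsum(hist)                    # pixels at or below level t
    m = np.cumsum(hist * np.arange(256))    # cumulative intensity mass
    w1 = w0[-1] - w0
    mu0 = np.where(w0 > 0, m / np.maximum(w0, 1), 0)
    mu1 = np.where(w1 > 0, (m[-1] - m) / np.maximum(w1, 1), 0)
    return int(np.argmax(w0 * w1 * (mu0 - mu1) ** 2))

def fish_mask(frames):
    """Subtract the mean of the red channels of a frame stack (the
    10-frame averaging described above) from the newest frame,
    then binarize with the global Otsu threshold."""
    red = frames[..., 0].astype(float)      # (T, H, W) red channels
    diff = np.abs(red - red.mean(axis=0))   # background-subtracted
    diff8 = (255 * diff / max(diff.max(), 1)).astype(np.uint8)
    return diff8[-1] > otsu_threshold(diff8[-1])

# synthetic check: a bright blob appears in the last of 10 dark frames
frames = np.zeros((10, 20, 20, 3), dtype=np.uint8)
frames[-1, 5:10, 5:10, 0] = 255
mask = fish_mask(frames)
```

In the actual pipeline, the resulting binary mask would then be morphologically opened and closed and passed to the size and shape filters described next.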

D. SIZE AND SHAPE FILTER
In the size filtering procedure, after eliminating objects on the boundary of each frame by labelling, the frame was labelled again to find the remaining fish inside the frame. Then the number of pixels in each connected component was counted. If the number was less than a certain threshold (in this case, 5000 pixels), the component was eliminated from the frame as a non-fish object. The size threshold is flexible and depends on the quality of the water, the size of the fish, and the quality of the image at each frame. To design a shape filter, the variation of the distances of the boundary pixels relative to the center of a connected component was considered for each component. The center of a connected component was calculated from (2):

x̄ = (1/N) Σᵢ x_i ,    ȳ = (1/N) Σᵢ y_i    (2)
where (x̄, ȳ) is the coordinate of the center point, (x_i, y_i) are the coordinates of the pixels with the same label, and N is the number of pixels in the connected component. In previous studies, it was shown that in a spindle-shaped object (which is the ordinary shape of a fish), the boundary pixels' distances vary significantly relative to the center of the component [26], [27]. Therefore, if the variation of the boundary pixels' distances with respect to a component center was less than a certain threshold, that component was considered not to be a fish and was eliminated from the frame. The threshold was correlated to the known size range of fish in a given tank, which in turn is correlated to fish age. Fig. 5 shows the results of the size and shape filtering. Although shape filtering assisted in cleaning the mask, it slightly increased the running time of the code, which nonetheless required only approximately 11 ms to process and extract data from each frame. Whereas run times are highly variable depending on hardware and operating systems, the result demonstrates that the data can be processed very efficiently.
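The shape criterion described above, the variation of boundary-pixel distances about the centroid of Eq. (2), can be sketched for a single connected component. This Python sketch is illustrative: the coefficient-of-variation score, the function name, and the synthetic test shapes are assumptions, not the study's exact thresholding.

```python
import numpy as np

def shape_score(component):
    """Coefficient of variation of boundary-pixel distances to the
    component centroid (Eq. 2).  Spindle-shaped (fish-like) blobs
    score high; near-circular debris scores low."""
    ys, xs = np.nonzero(component)
    cy, cx = ys.mean(), xs.mean()            # centroid, Eq. (2)
    # boundary = component pixels with at least one background 4-neighbour
    padded = np.pad(component, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    by, bx = np.nonzero(component & ~interior)
    d = np.hypot(by - cy, bx - cx)           # boundary-to-centroid distances
    return d.std() / d.mean()

rect = np.zeros((50, 10), bool)
rect[5:45, 3:7] = True                       # elongated, fish-like blob
square = np.zeros((30, 30), bool)
square[5:25, 5:25] = True                    # compact, debris-like blob
spindle_score, blob_score = shape_score(rect), shape_score(square)
```

A component whose score falls below a tank-specific threshold would be discarded as non-fish, mirroring the filter described above.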

E. OBJECT TRACKING THROUGH SUCCESSIVE FRAMES
After preparing a mask for each frame, each object was tracked from frame to frame. From one frame to another, the position and number of fish in a frame may change. Therefore, the labels of a given frame are not necessarily applicable to the next frame, and the label of a specific object may differ. To track objects across frames and to predict the path of a given object, a Kalman filter is typically employed [28]. However, implementing a Kalman filter for multiple objects is a more complicated programming problem and more computationally intensive, particularly for a fully scaled-up system. Therefore, we used the Manhattan distance (D_M) in this study. The Manhattan distance is the sum of the absolute horizontal and vertical distances between the centers of two objects, per (3):

D_M = |x_i − x_j| + |y_i − y_j|    (3)
where (x_i, y_i) and (x_j, y_j) are the centers of mass of two different masks. A limitation of this approach is that it treats diagonal motion differently than horizontal or vertical motion, with no physical significance. When considered together, the Euclidean distance D_E and D_M allow more meaningful inferences than either measure considered on its own: D_E indicates that an object moved, and D_M implies that it was a given fish that moved from position i to position j. If the D_M and the D_E of two separate points in two successive frames are below a specific threshold, it was assumed that these two points belong to an entity representing a single fish. In this study, the D_M and D_E thresholds were set at 500 pixels. A known limitation of this approach is that, if two fish are moving in the same direction (one 'behind' the other, relative to the camera), the assumption would be violated. Fig. 6 summarizes the algorithm to detect and track the fish across frames. A future consideration is whether one can generate an ellipse representing the fish and then use the major axis as a predictor of direction of travel. This algorithmic approach was not investigated here due to insufficiency of the data.
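A minimal sketch of the frame-to-frame association just described, applying the 500-pixel gate to both the Manhattan distance of (3) and the Euclidean distance of (1). The greedy nearest-match strategy and function name here are illustrative assumptions, not the study's exact implementation:

```python
import math

def match_centroids(prev, curr, max_dist=500):
    """Associate each centroid in the current frame with the nearest
    centroid of the previous frame, provided both its Manhattan (Eq. 3)
    and Euclidean (Eq. 1) distances fall under the pixel gate.
    Unmatched centroids (value None) are treated as newly seen fish."""
    matches = {}
    for j, (xj, yj) in enumerate(curr):
        best, best_d = None, max_dist
        for i, (xi, yi) in enumerate(prev):
            d_m = abs(xi - xj) + abs(yi - yj)       # Manhattan, Eq. (3)
            d_e = math.hypot(xi - xj, yi - yj)      # Euclidean, Eq. (1)
            if d_m <= max_dist and d_e <= best_d:
                best, best_d = i, d_e
        matches[j] = best
    return matches

prev = [(0, 0), (1000, 1000)]
curr = [(10, 10), (990, 1005)]
links = match_centroids(prev, curr)
```

As noted above, this association fails when one fish passes directly behind another relative to the camera, which motivates the interpolation step described later.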
The post-processing sequence described here is computationally intensive relative to the number of fish tracked per frame and is not immediately scalable to track all fish in the frame in real time. As noted earlier in the description of the lighting-camera hardware assembly, it is also limited to the first one or two 'layers' of fish from the camera lens. Whereas this is far from the objective of tracking all fish in a tank, it may be sufficient as a statistically representative sample of fish behavior, if the camera position accounts for fish hierarchy and fish health (for example, a camera at the bottom of the tank will not detect mortalities floating at the surface).

IV. RESULTS AND DISCUSSION
In this section, preliminary results of the camera calibration, tracking, and fish velocity estimations are presented. These likewise show the potential of the system, while demonstrating artifacts that need to be addressed before reliability is established. The preliminary results are observational outcomes and some descriptive statistics, as the prototype system and the data to date are insufficient to generate large-scale, detailed quantitative comparisons.

A. CAMERA CALIBRATION
The calibration procedure was performed both in air and in turbid water similar to the actual aquaculture tank, without additional lighting, to simulate the worst-case scenario. Fig. 7 shows the size of each pixel in water and in air at different distances relative to the calibration chart (Fig. 3). This information was then applied to the fishes' velocity estimation to correct for their distance from the camera. Fig. 7 demonstrates that the calibration chart was not clear at distances beyond 30 cm in water, translating into the camera's inability to recognize objects below 5 cm in size at distances of more than 30 cm in the relatively turbid tank conditions. In this aquaculture operation, this was not necessarily a limiting factor, as all fish were between 5 cm and 45 cm in length. However, it is a consideration for other applications, which would require multiple cameras in multiple positions for a full three-dimensional view.

B. TRACKING AND ESTIMATING FISH VELOCITY AND SIZE
The objective of the study was to explore the ability to track fish, to observe their swim behaviors, and to create a distribution of fish speeds in a frame. Since the time difference between frames is small (33.3 ms), it was assumed that the position of a swimming fish does not change considerably from frame to frame. Therefore, the planar velocity of each fish is the simple division of the displacement by the time interval between two successive frames. A known limitation is that this approach assumes that a fish's position is not changing in a direction that the camera cannot detect, which again reinforces the need for multiple cameras in multiple positions to create three-dimensional images of the tank for a full-scale installation.
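The velocity computation just described, displacement between successive centroids divided by the 33.3 ms frame interval and scaled by the calibrated pixel size, can be sketched as follows (the function name and example values are illustrative assumptions):

```python
import math

def swim_velocity(p1, p2, cm_per_pixel, dt=1 / 30):
    """Planar swim velocity (cm/s) between two successive centroids,
    using the 30 frames/s interval (33.3 ms) and the calibrated pixel
    scale at the fish's estimated distance from the camera."""
    d_pix = math.hypot(p2[0] - p1[0], p2[1] - p1[1])  # displacement, px
    return d_pix * cm_per_pixel / dt

# 50-pixel displacement at 0.1 cm/pixel over one frame interval
v = swim_velocity((0, 0), (30, 40), 0.1)
```

Note that this is the velocity component in the image plane only; out-of-plane motion toward or away from the camera is not captured, per the limitation stated above.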
MATLAB stores most images as arrays. In two-dimensional matrices, each element of the matrix corresponds to a single discrete pixel in the displayed image. Three-dimensional arrays are used to separate the R(ed), G(reen), and B(lue) pixel intensities of each image. Further, pixel indices are a convenient way to express locations in an image: the image is treated as an x-y grid of discrete elements with the (0,0) origin at the top left of the figure. This representation is used in Fig. 8, 9, and 10, which show the location of the centroid over successive video frames (translated into arrays in MATLAB).
To calculate a fish's displacement, the Euclidean distance of the first and the subsequent position of the center of mass of a given fish was calculated. From the calibration results, the actual magnitude of the calculated distance was found and correlated to the distance of the fish from the camera, assuming a constant size of fish in a given tank. Fig. 8 demonstrates two snapshots of the displacement of a fish between 15 consecutive frames. A limitation arises when two or more fish pass over one another, temporarily obscuring the fish that is farther from the camera. This is a very common occurrence, eliminating the ability to track that fish for a given period of time. To compensate, missing data were filled in through simple interpolation; however, cubic spline interpolation may offer more realism in future analysis, and the inclusion of multiple cameras may reduce the problem to insignificance. Fig. 9 illustrates an example of interpolation of missing data.
The path interpolation shown in Fig. 9 is preceded by an algorithmic judgment: when a mask and its corresponding centroid disappear and subsequently a mask and its corresponding centroid reappear, the combination of time and position is evaluated to determine whether the two can reasonably be assumed to belong to the same fish, and thus whether to interpolate the path for the missing data. This is again based on knowing that each frame represents 33.3 ms of elapsed time and on setting 500 pixels as the maximum allowable distance between the centroids in two successive frames for a given fish.
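The simple gap-filling used when an occluded fish reappears can be sketched as linear interpolation over the missing centroids of a track (this Python rendition and the track values are illustrative; the 500-pixel/33.3 ms gating judgment described above is applied before this step):

```python
import numpy as np

def fill_track_gap(track):
    """Linearly interpolate missing (None) centroids in a track,
    the simple interpolation used to bridge short occlusions.
    track: list of (x, y) tuples or None for occluded frames."""
    xs = np.array([np.nan if p is None else p[0] for p in track], float)
    ys = np.array([np.nan if p is None else p[1] for p in track], float)
    idx = np.arange(len(track))
    known = ~np.isnan(xs)                     # frames with a detection
    xs = np.interp(idx, idx[known], xs[known])
    ys = np.interp(idx, idx[known], ys[known])
    return list(zip(xs, ys))

track = [(0, 0), None, (2, 4)]                # centroid lost for one frame
filled = fill_track_gap(track)
```

Cubic spline interpolation, as suggested above, would replace the `np.interp` calls; for gaps longer than a few frames, neither is reliable.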
The tracking algorithm was able to track more than one fish at a time and determine fish trajectories while fish were in the camera's field of view and their mask met the size and shape criteria. Assessments on 1500 frames demonstrated that the algorithm can track up to six fishes at a time in each frame in terms of swimming pattern and estimated velocity at a specific distance from the camera.
Stationary objects are automatically eliminated from the velocity histogram. If an object is almost motionless through several successive frames, it might be classified as debris or as a fish mortality (if the camera is positioned near the surface). However, the video data also demonstrated that occasionally a healthy and otherwise active fish will stop directly in front of the camera and obscure the field of view until it decides to move along. As such, lack of movement is not necessarily a robust predictor of a non-fish object or a fish mortality. Further, if a fish blocks the field of view for longer than a few seconds, simple interpolation is insufficient to estimate the missing data. Fig. 10 shows, in an x-y map, the trajectories of 10 different objects over 80 frames (approximately 2.7 s). In this sample, objects 2, 6, 7, and 9 remained relatively stationary through the 80 frames. In contrast, objects 1, 3, 4, 5, and 10 had similar swimming curvature patterns.
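The elimination of stationary objects from the velocity histogram can be sketched as a displacement test over a window of tracked centroids (the 5-pixel threshold and function name here are assumptions for illustration, not values from the study):

```python
import numpy as np

def is_stationary(track, pix_thresh=5.0):
    """Flag a tracked object as stationary (candidate debris or, near
    the surface, a possible mortality) when its centroid never strays
    more than pix_thresh pixels from its mean position."""
    pts = np.asarray(track, float)                  # (N, 2) centroids
    spread = np.hypot(*(pts - pts.mean(axis=0)).T).max()
    return bool(spread <= pix_thresh)

parked = [(100, 100)] * 80                          # motionless over 80 frames
swimmer = [(0, 0), (50, 50), (120, 80)]             # actively moving
```

As the text cautions, such a test alone cannot distinguish debris, a mortality, or a healthy fish pausing in front of the lens; it only filters the velocity histogram.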
In addition to fishes' trajectories, an objective of the study was to evaluate the ability to determine fishes' swim velocities. Based on the calibration curve (shown earlier as Fig. 7), the Euclidean distance D_E is calculated. Together with the knowledge of the time interval between frames (33.3 ms), the velocity can be estimated. Fig. 11 demonstrates the 100th, 200th, and 400th frames and the velocity histogram of the fishes from the beginning of the recording to the mentioned frame. The histograms provide an initial understanding of the distribution of velocities in the tank. A smoothed curve implies a fairly Gaussian distribution of velocities with a constant average over time and a small standard deviation, whereas the histogram data imply a potentially bimodal distribution at times. Further data and analysis would be needed to verify these initial observations.
Other authors have likewise undertaken fish identification and tracking using video systems [22]- [24]. Even the most conceptually similar studies relative to the overall objective of this present work are notably variable in the species of fish considered, fish housing conditions, combinations of video and lighting conditions used, and algorithmic approaches applied. Each of these factors has a significant impact and makes direct comparison to the present work challenging. For example, the fish considered in these studies varied from 2-3 cm in size [23] to a consideration of other animals in addition to fish [24]. One study used a distinctive dark stripe on the side of the fish as the identifying feature for algorithmic analysis [13] while another searched for the fish head [22]. Conditions ranged from in situ detection to indoor aquaculture operations, with additional variations in tank size and stocking density.
The range of algorithmic approaches proposed (with varying degrees of success) included covariance algorithms [14], cost function calculations followed by global optimization [22], Kalman filtering and feature matching [23], and identifying animals using intensity histograms and Hu moments of detected animals [24]. However, all were applied to populations qualitatively different from that of the present study.
Direct comparisons of technology choices in lighting, video, and processing are particularly challenging in light of the rapid evolution of camera technology and data processing options. While the prototype system used in this study was developed for under $1000 as a mobile installation that could be used in any one tank at a time, the cost ceiling is several orders of magnitude higher when considering variables such as the size of the aquaculture operation, desired degree of coverage, desired sophistication of camera technology, data feed and processing needs, and mobile vs. permanent installations (where the latter may require alterations to other components of the overall aquaculture operation, e.g. the tanks). These factors make direct comparisons challenging and potentially misleading.

V. CONCLUSION
In this study, a prototype hardware/software system was developed and tested in an indoor aquaculture setting, specifically with Arctic Char. Algorithms were developed to detect fish in the tanks and to estimate fishes' sizes, trajectories, and swim velocities. Video was recorded in one- to five-minute segments under combinations of lighting conditions and camera positions. Video processing algorithms were developed to track and extract information from individual video frames. Development choices with respect to lighting locations and colors, as well as camera positions, were outlined.
Overall, the work provided a qualitative assessment of the prototype system through observational outcomes and some descriptive statistics. Though fish tracking and the estimation of size, swim orientation, velocity, and other behaviors are easy to conceptualize, we found that performing these tasks was difficult in practice. The combination of water turbidity, stocking density, and the priority of keeping environmental conditions (e.g. lighting) conducive to fish wellbeing creates challenges to capturing video of high enough quality to extract significant features. Multiple cameras, and cameras in dry locations, are considered next steps to enhance video and to provide more options for camera power and data storage and transfer. At the same time, multiple cameras will multiply the amount of data to be processed and will require the development of algorithms to synchronize frames into one representation. The proposed algorithms for image calibration and estimation of fish size and velocity show promise, while also pointing to the further work needed before they can be meaningfully applied to a statistically significant sample of fish in real time or near-real time.
Overall, the prototype system to monitor fish in-situ is simple, low-cost, and easy to implement. A commercially viable system will require additional hardware considerations in the form of industrially hardened lighting and video capture systems for long-term and underwater use (or viable out-of-water alternatives). A commercially viable system would also require additional algorithm development for real-time data processing on a larger scale.
The results of this study offer insights to others into the opportunities and challenges associated with real-time monitoring in aquaculture facilities, with the ultimate aim of detecting potential problems in real time and/or remotely to pre-empt negative conditions and maximize fish health.