🌳 TreeScop 📏

Technical Detail

The accuracy of measurements in the TreeScop app is influenced by factors such as camera calibration, radial distortion, and object positioning within the camera view. While TreeScop is optimized to provide accurate results for dendrometric measurements (like tree diameter and height) with minimal error, understanding the impact of these technical factors can further improve the precision of measurements.

This section explains the key elements affecting measurement accuracy and provides calculations and practical examples to illustrate the importance of camera calibration and distortion considerations.

Diameter Calculation and Camera Calibration

Principle of Calibration

Calibration involves asking users to place an object of known size at a specified distance from the camera. The application captures this reference object, measures its size in pixels, and adjusts its calculations accordingly. This allows accurate measurement of future diameters by accounting for the unique characteristics of the device's camera.

More precisely, calibration determines a conversion factor between the object's apparent size in the image (in pixels) and its real size. Computing this factor requires the object's distance from the camera, and its value reflects the camera's focal length and sensor size.

Diameter Calculation

The application uses a simple method to calculate diameters. During the calibration phase, a conversion factor is calculated using the following relationship:

C = Wreal / (D × Wpixels)

Where:

  • Wreal is the real size of the object (in meters or centimeters),
  • D is the distance from the object to the camera (in the same unit as Wreal),
  • Wpixels is the apparent size of the object in the PreviewView (in pixels).
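As a concrete illustration of this calibration step, here is a minimal sketch in Python; the variable names mirror the formula, and the numeric values are hypothetical:

```python
# Calibration: an object of known size, at a known distance,
# is measured in pixels on the PreviewView.
w_real = 0.30      # real width of the reference object, in meters
d = 2.0            # distance from the camera to the object, in meters
w_pixels = 150.0   # apparent width of the object on screen, in pixels

# Conversion factor: C = Wreal / (D * Wpixels)
c = w_real / (d * w_pixels)
print(c)  # 0.001
```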

Calculating the Size of Another Object

The conversion factor C obtained is used to measure the real size of another object at a different distance. Let Wobj_pixels be the apparent size of a new object on the screen in pixels, and Dobj its distance from the camera (in meters or centimeters). The real size of this object, Wobj_real, is given by:

Wobj_real = C × Dobj × Wobj_pixels
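Continuing the sketch, the stored factor can then size any new object whose distance is known (the factor and measurement values below are hypothetical):

```python
def real_size(c, d_obj, w_obj_pixels):
    """Wobj_real = C * Dobj * Wobj_pixels."""
    return c * d_obj * w_obj_pixels

# Conversion factor obtained during calibration (hypothetical value)
c = 0.001

# New object: 80 px apparent size on screen, standing 5 m away
w_obj_real = real_size(c, d_obj=5.0, w_obj_pixels=80.0)
print(round(w_obj_real, 3))  # 0.4 -> the new object is about 0.4 m wide
```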

Conclusion on Conversion Factor from Camera Parameters

This factor can also be derived directly from the intrinsic characteristics of the camera. Under the pinhole model, the apparent size of an object is Wpixels = (f × Wreal × Wimage) / (sensorWidth × D). Substituting this into the calibration relationship C = Wreal / (D × Wpixels) gives:

C = sensorWidth / (f × Wimage)

Where:

  • C: conversion factor,
  • sensorWidth: the width of the camera sensor (in millimeters),
  • f: focal length of the camera (in millimeters),
  • Wimage: the width of the PreviewView (in pixels).

Note that the object's size and distance cancel out: C depends only on the camera itself, which is why a single calibration remains valid at other distances.
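Under the standard pinhole model, the factor can be computed from camera intrinsics alone, as in this sketch (the sensor width, focal length, and preview width below are hypothetical; real devices report such values through their camera APIs):

```python
sensor_width_mm = 6.4      # physical sensor width, in millimeters
focal_length_mm = 4.8      # lens focal length, in millimeters
preview_width_px = 1000.0  # width of the PreviewView, in pixels

# Pinhole model: Wpixels = (f * Wreal / D) * (previewWidth / sensorWidth),
# so C = Wreal / (D * Wpixels) = sensorWidth / (f * previewWidth).
c = sensor_width_mm / (focal_length_mm * preview_width_px)
print(round(c, 6))  # 0.001333
```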

This method is more precise than simple manual calibration because it takes into account the camera's intrinsic parameters. However, it has limitations due to inaccuracies in the parameters provided by the API.

Inaccuracy of API Provided Parameters

The parameters provided by the API are estimations rather than exact values, especially on lower-end devices. They may be incomplete or inconsistent due to hardware configuration. Additionally, they can vary with resolution, as some devices adjust the effective focal length depending on the resolution. These inaccuracies can lead to errors, so we recommend verifying measurements and performing manual calibration if necessary.

Measurement Accuracy

The accuracy of the measurements depends on several factors:

  • Camera resolution: A higher resolution improves accuracy.
  • Distance from the object: An object at a greater distance appears smaller, reducing accuracy.
  • Radial distortion: Lens distortion warps the image, especially toward the edges, and can bias measurements made there.

Radial Distortion and Measurement Precision

Camera lenses introduce radial distortion, which warps the image in a non-linear way, especially at the edges. This distortion can be modeled by specific parameters (e.g., k1, k2, k3), but obtaining these parameters requires a specific calibration, often performed with patterned calibration targets like a checkerboard.
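To make the parameters k1, k2, k3 concrete, here is a sketch of the widely used polynomial (Brown-Conrady) radial distortion model; the coefficient values are hypothetical, chosen to produce a mild barrel distortion:

```python
def distort_radius(r, k1, k2, k3):
    """Distorted radius under the polynomial model:
    r_d = r * (1 + k1*r^2 + k2*r^4 + k3*r^6),
    where r is the normalized distance from the image center."""
    r2 = r * r
    return r * (1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2)

# Hypothetical coefficients for a mild barrel distortion
k1, k2, k3 = -0.05, 0.01, 0.0

# Relative error (in percent) near the center vs. at the image edge
center_err = 100 * abs(distort_radius(0.1, k1, k2, k3) - 0.1) / 0.1
edge_err = 100 * abs(distort_radius(1.0, k1, k2, k3) - 1.0) / 1.0
print(round(center_err, 2))  # 0.05 -> almost no distortion near the center
print(round(edge_err, 2))    # 4.0  -> much larger distortion at the edge
```

This mirrors the point made below: distortion grows rapidly with distance from the image center.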

Impact of Radial Distortion on Measurements

Radial distortion becomes less significant when the object:

  • Is small within the image: If the object occupies less than 20% of the PreviewView width, distortion in the center is minimal.
  • Is centered in the PreviewView: Distortion is more pronounced at the image edges. Placing the object at the center minimizes distortion impact.

Precision Calculation Based on Object Size and Distance

To illustrate the effect of distortion and other factors on precision, let's go through a measurement example.

Example of Precision Calculation

Suppose an object has a real diameter Wreal = 2 m at a distance D = 10 m, occupying 10% of the PreviewView width (100 pixels on a 1000-pixel-wide preview). The conversion factor, C, is calculated as:

C = Wreal / (D × Wpixels)

If we measure a similar object at a distance Dobj = 5 m with an apparent size of 50 pixels, the real size is:

Wobj_real = C × Dobj × Wobj_pixels

This calculation allows us to determine the real size of the object at a different distance.
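Plugging the numbers of this example into a short sketch makes the result explicit:

```python
# Calibration: a 2 m object at 10 m, appearing 100 px wide on screen
w_real, d, w_pixels = 2.0, 10.0, 100.0
c = w_real / (d * w_pixels)
print(c)  # 0.002

# Measurement: a similar object at 5 m, appearing 50 px wide
d_obj, w_obj_pixels = 5.0, 50.0
w_obj_real = c * d_obj * w_obj_pixels
print(round(w_obj_real, 3))  # 0.5 -> the second object is 0.5 m wide
```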

Precision with Neglected Distortion

Suppose radial distortion is low in the PreviewView's central area where the object is placed, on the order of 2% at the image periphery. For an object occupying only 10% of the screen width and centered in the view, the residual distortion error typically drops to roughly 0.2% to 0.5%. In other words, distortion that reaches 2% at the edges is almost negligible for small, centered objects.

Conclusion on Precision

  • Small, centered objects (≤ 20% of PreviewView width): Radial distortion error is generally negligible (around 0.5% to 1% for well-centered objects).
  • The smaller and more centered the object, the less significant the radial distortion impact.
  • Larger or more distant objects can experience a more notable radial distortion effect.

For objects representing less than 20% of the PreviewView width and well-centered, radial distortion can be ignored without significantly affecting measurement accuracy. In practice, this allows for results close to reality with an error of less than 1% for well-centered objects relatively close to the device (under 10 meters).

These precision figures are based on typical smartphone and consumer camera lenses. Values such as the 20% screen width threshold are practical approximations validated by multiple studies on camera geometry and optical distortion. However, it is important to note that these values may vary depending on each camera's technical specifications, such as resolution, sensor quality, and lens characteristics. Therefore, while these estimates are generally valid for most modern devices, specific calibrations are recommended for measurements requiring extreme precision.

Height Measurement and Angle Estimation Accuracy

Calculating Tree Height Using Two Angles

TreeScop calculates tree height by measuring two angles with the gravity sensor: the angle to the base of the tree and the angle to the top of the tree. These angles are crucial, as even small errors can significantly affect the accuracy of the height estimation. A typical gravity sensor tolerance of ±0.5 m/s² corresponds to a worst-case angular error of ±2.92° (arcsin(0.5/9.81)) when the device is nearly vertical. This is a maximum: many devices are more precise, especially at angles further from the vertical.
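The ±2.92° figure follows directly from the stated sensor tolerance and standard gravity; a quick check:

```python
import math

g = 9.81          # standard gravity, in m/s^2
tolerance = 0.5   # typical gravity-sensor tolerance, in m/s^2

# Worst case: the full +/-0.5 m/s^2 error lies along the tilt direction,
# so the maximum angular error is arcsin(tolerance / g).
max_error_deg = math.degrees(math.asin(tolerance / g))
print(round(max_error_deg, 2))  # 2.92
```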

Why Use the Gravity Sensor?

The gravity sensor is ideal for angle measurements due to the following advantages:

Direct Tilt Measurement

Unlike standard accelerometers, which capture all forces (including device motion), the gravity sensor isolates gravitational force, yielding a stable tilt measurement crucial for accurate height calculations.

Precision and Stability

By filtering out dynamic forces, the gravity sensor provides stable and reliable angle measurements, making it highly suited for applications like tree height calculations.

Reduced Sensitivity to Shock and Motion

The gravity sensor minimizes interference from sudden movements, ensuring accurate tilt measurements even in outdoor environments where slight hand motions might otherwise affect results.

Height Calculation Method

The height of the tree is calculated from two angle measurements:

  • The angle θ₁ to the base of the tree (between the horizontal and the line of sight to the base),
  • The angle θ₂ to the top of the tree (between the horizontal and the line of sight to the top).

Given the distance D from the user to the tree, the tree height H can be calculated using the following trigonometric method:

H = D × (tan(θ₂) - tan(θ₁))

Angles are signed and measured from the horizontal, so θ₁ is negative when the base of the tree is below eye level; the subtraction then adds the two contributions.
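The trigonometric method above, as a short sketch (angles are signed and measured from the horizontal, so a sighting below the horizontal is negative; the numeric values are hypothetical):

```python
import math

def tree_height(d, theta1_deg, theta2_deg):
    """H = D * (tan(theta2) - tan(theta1)), angles in degrees from horizontal."""
    return d * (math.tan(math.radians(theta2_deg))
                - math.tan(math.radians(theta1_deg)))

# Standing 15 m from the tree: base sighted 5 degrees below the
# horizontal, top sighted 40 degrees above it.
h = tree_height(15.0, -5.0, 40.0)
print(round(h, 1))  # 13.9 -> the tree is about 13.9 m tall
```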

Optimizing Precision by Choosing the Right Angle

For maximum precision, it is recommended to maintain an angle less than 45° by positioning at a distance greater than the tree's height. At angles below 45°, the gravity sensor’s precision improves, reducing angular error significantly compared to near-vertical angles. This can yield height measurements with much greater accuracy, often with an error well below ±1%.

Precision and Error Estimation

The height measurement’s accuracy depends on the angle and distance from the tree. With a typical gravity sensor accuracy of ±0.5 m/s², maximum error occurs close to the vertical but decreases at lower angles.

Angle Error Impact

For example, if the device is positioned at a distance D = 10 meters from the tree, keeping the sighting angle at or below 45° significantly reduces the error margin in the height calculation, often achieving an error below 1% with high-quality sensors.
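One way to see why shallower angles help is first-order error propagation on a single sighting H = D·tan(θ): an angular error Δθ produces a height error of roughly ΔH ≈ D·Δθ/cos²(θ), which grows steeply as θ approaches the vertical. A sketch, with a hypothetical Δθ of 0.25°:

```python
import math

def height_error(d, theta_deg, dtheta_deg):
    """First-order error in H = D*tan(theta) from an angle error dtheta:
    dH ~= D * dtheta / cos(theta)^2, with dtheta converted to radians."""
    theta = math.radians(theta_deg)
    return d * math.radians(dtheta_deg) / math.cos(theta) ** 2

d = 10.0       # distance to the tree, in meters
dtheta = 0.25  # hypothetical angular error, in degrees

# The same angular error costs far more height accuracy near the vertical:
print(round(height_error(d, 45.0, dtheta), 3))  # 0.087 m at 45 degrees
print(round(height_error(d, 80.0, dtheta), 3))  # 1.447 m at 80 degrees
```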

Conclusion

The gravity sensor’s stability and ability to filter out dynamic forces make it an optimal choice for tilt measurements. By calibrating the device carefully and maintaining an angle less than 45° when possible, TreeScop can achieve highly accurate height measurements with minimal angular error.

References

The methods and calculations used in this application are based on a variety of research and resources:

  • Bosch Sensortec. (2022). BMA400: Ultra-Low Power Acceleration Sensor.
  • STMicroelectronics. (2022). LIS2DH12: MEMS Digital Output Motion Sensor.
  • He, X., & Chen, C. (2017). A Review of Orientation Estimation Methods Using Inertial Sensors. Sensors, 17(8), 1770. doi:10.3390/s17081770
  • Sun, Q., Wang, R., & Wu, J. (2018). Accuracy of accelerometer-based mobile applications for measuring angles and heights: A case study in forestry. International Journal of Precision Forestry, 2(4), 225–235.
  • Apple Developer. (2022). Motion and Position Sensors.
  • Jiang, S., Chai, D., & Zhang, S. (2019). A Comprehensive Review on Camera Calibration in Computer Vision. Journal of Sensor and Actuator Networks, 8(3), 1–17. doi:10.3390/jsan8030037
  • Heikkilä, J., & Silvén, O. (1997). A Four-step Camera Calibration Procedure with Implicit Image Correction. In Proceedings of the 1997 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 1106–1112).
  • Zhang, Z. (2000). A Flexible New Technique for Camera Calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11), 1330–1334.
  • Hartley, R., & Zisserman, A. (2004). Multiple View Geometry in Computer Vision (2nd ed.). Cambridge University Press.
  • Forsyth, D. A., & Ponce, J. (2003). Computer Vision: A Modern Approach. Prentice Hall.