nnspike.utils.control

Computer vision control utilities for robot navigation and object detection.

This module provides computer vision algorithms for robot control applications, including line following, object detection, and target recognition. The functions process camera images to extract navigation information and detect specific objects for autonomous robot control systems.

The module supports various detection tasks:

  • Line edge detection for path following

  • Colored object detection for navigation markers

  • Bullseye target detection for precision tasks

  • Gate detection using virtual line calculation

  • Attitude angle calculation for steering control

Functions:
find_line_edges_at_y(image: np.ndarray, roi: tuple[int, int, int, int], target_y: float, threshold_value: float = 50) -> tuple[float | None, float | None]:

Get the left and right edge points of a black line at a specific Y coordinate.

find_bottle_center(image: np.ndarray, color: str, min_area: int = 500) -> tuple[tuple[float, float] | None, np.ndarray | None, float | None]:

Find the center coordinates and color pixel count of a colored object in an image using OpenCV.

find_bullseye(image: np.ndarray, threshold: float = 120) -> tuple[tuple[float, float] | None, np.ndarray | None, float | None]:

Find the center coordinates of a blue bullseye target in an image.

find_gate_virtual_line(image: np.ndarray, scan_x: int = 320, from_y: int = 0, to_y: int = 480) -> tuple[tuple[float, float] | None, np.ndarray | None, float | None]:

Find the virtual line for gate detection based on gray color regions.

calculate_attitude_angle(offset_pixels: float, roi_bottom_y: int, camera_height: float = 0.20, focal_length_pixels: float = 640) -> float:

Calculate attitude angle (theta) from pixel offset using camera geometry.

Note

All functions expect BGR format numpy arrays as input images. The module is optimized for real-time applications with efficient contour detection and morphological operations.
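
Example

A minimal line-following sketch using these utilities; the camera index, frame size (640x480), ROI placement, and steering gain below are illustrative assumptions rather than values prescribed by the module, and the returned edge coordinates are assumed to be in original image coordinates.

    import cv2
    from nnspike.utils.control import find_line_edges_at_y, calculate_attitude_angle

    cap = cv2.VideoCapture(0)                # assumed camera index
    try:
        ret, frame = cap.read()              # BGR frame, assumed 640x480
        if ret:
            roi = (0, 240, 640, 240)         # (x, y, width, height): lower half of the frame
            target_y = 400.0                 # scan row inside the ROI, close to the robot

            left_x, right_x = find_line_edges_at_y(frame, roi, target_y, threshold_value=50)
            if left_x is not None and right_x is not None:
                line_center_x = (left_x + right_x) / 2.0
                offset_pixels = line_center_x - 320.0      # lateral offset from image center
                theta = calculate_attitude_angle(offset_pixels, roi_bottom_y=480)
                steering = -2.0 * theta                    # illustrative proportional gain
                print(f"offset={offset_pixels:.1f}px theta={theta:.3f}rad steering={steering:.2f}")
    finally:
        cap.release()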

Functions

calculate_attitude_angle(offset_pixels, ...)

Calculate attitude angle (theta) from pixel offset using camera geometry.

find_bottle_center(image, color[, min_area])

Find the center coordinates and color pixel count of a colored object in an image using OpenCV.

find_bullseye(image[, threshold])

Find the center coordinates of a blue bullseye target in an image.

find_gate_virtual_line(image[, scan_x, ...])

Find the virtual line for gate detection based on gray color regions.

find_line_edges_at_y(image, roi, target_y[, ...])

Get the left and right edge points of a black line at a specific Y coordinate.

nnspike.utils.control.find_line_edges_at_y(image, roi, target_y, threshold_value=50)[source]

Get the left and right edge points of a black line at a specific Y coordinate.

Parameters:
  • image (np.ndarray) – Input image (BGR or grayscale).

  • roi (tuple[int, int, int, int]) – Tuple (x, y, width, height) defining the ROI.

  • target_y (float) – The Y coordinate at which to detect line edges (in original image coordinates).

  • threshold_value (float, optional) – Threshold for binary conversion. Defaults to 50.

Returns:

Tuple containing:
  • left_x: X coordinate of left edge (None if not found)

  • right_x: X coordinate of right edge (None if not found)

Return type:

tuple[float | None, float | None]
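
Example

A short usage sketch, assuming a 640x480 BGR image on disk and a lower-half ROI; the drawing calls are only for visual inspection, and the returned edge coordinates are assumed to be in original image coordinates.

    import cv2
    from nnspike.utils.control import find_line_edges_at_y

    frame = cv2.imread("track.jpg")          # assumed 640x480 BGR image
    roi = (0, 240, 640, 240)                 # (x, y, width, height)
    target_y = 400.0

    left_x, right_x = find_line_edges_at_y(frame, roi, target_y)
    for edge_x in (left_x, right_x):
        if edge_x is not None:
            cv2.circle(frame, (int(edge_x), int(target_y)), 5, (0, 0, 255), -1)
    cv2.imwrite("track_debug.jpg", frame)    # inspect detected edges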

nnspike.utils.control.find_bottle_center(image, color, min_area=500)[source]

Find the center coordinates and color pixel count of a colored object in an image using OpenCV.

This function detects objects of a specified color in an image and returns information about the largest detected object. It supports yellow, blue, and red color detection and can be used for various applications including object tracking, color-based navigation, and visual recognition.

This function is optimized for real-time applications with the following improvements:

  • Accepts numpy array input instead of file paths for real-time processing

  • Uses adaptive thresholding for better edge detection under various lighting conditions

  • Applies contour area filtering to reduce noise and false detections

  • Includes aspect ratio validation to filter out non-object-like shapes

  • Uses smaller morphological kernels for better performance

  • Removes debug print statements for cleaner real-time operation

Parameters:
  • image (np.ndarray) – Input image as numpy array (BGR format).

  • color (str) – Color to detect (‘yellow’, ‘blue’, or ‘red’).

  • min_area (int, optional) – Minimum contour area threshold for filtering noise. Defaults to 500.

Returns:

Tuple containing:
  • center: (x, y) center coordinates of the largest detected object

  • largest_contour: Contour of the largest detected object

  • color_pixel_count: Number of detected color pixels

Returns (None, None, 0) if not found.

Return type:

tuple[tuple[float, float] | None, np.ndarray | None, float | None]

Raises:

ValueError – If color parameter is not ‘yellow’, ‘blue’, or ‘red’.
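
Example

A usage sketch for yellow-object detection; the input file name and drawing calls are illustrative only.

    import cv2
    from nnspike.utils.control import find_bottle_center

    frame = cv2.imread("scene.jpg")          # assumed BGR image
    center, contour, pixel_count = find_bottle_center(frame, color="yellow", min_area=500)

    if center is not None:
        cx, cy = center
        cv2.drawContours(frame, [contour], -1, (0, 255, 0), 2)
        cv2.circle(frame, (int(cx), int(cy)), 5, (0, 0, 255), -1)
        print(f"yellow object at ({cx:.0f}, {cy:.0f}) with {pixel_count} color pixels")
    else:
        print("no yellow object found")
    cv2.imwrite("scene_debug.jpg", frame)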

nnspike.utils.control.find_bullseye(image, threshold=120)[source]

Find the center coordinates of a blue bullseye target in an image.

Uses blue color masking followed by circular shape detection for efficient bullseye detection. Prioritizes detections within a specified distance from the image center (x=320).

Parameters:
  • image (np.ndarray) – Input image as numpy array (BGR format).

  • threshold (float, optional) – Maximum allowed distance from image center (x=320). Bullseyes within range [320-threshold, 320+threshold] are prioritized. Defaults to 120.

Returns:

Tuple containing:
  • center: (x, y) center coordinates of the detected bullseye

  • contour: Detected bullseye contour

  • blue_pixel_count: Number of blue pixels

Returns (None, None, None) if not found.

Return type:

tuple[tuple[float, float] | None, np.ndarray | None, float | None]
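
Example

A usage sketch; the input image and the choice of the default 120-pixel window around the image center are assumptions for illustration.

    import cv2
    from nnspike.utils.control import find_bullseye

    frame = cv2.imread("target.jpg")         # assumed 640x480 BGR image
    center, contour, blue_pixels = find_bullseye(frame, threshold=120)

    if center is not None:
        offset_pixels = center[0] - 320      # lateral offset from image center (x=320)
        print(f"bullseye at {center}, offset {offset_pixels:.1f}px, {blue_pixels} blue pixels")
    else:
        print("no bullseye found within the accepted range")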

nnspike.utils.control.find_gate_virtual_line(image, scan_x=320, from_y=0, to_y=480)[source]

Find the virtual line for gate detection based on gray color regions.

This function detects gray regions in an image and calculates the center point between the leftmost and rightmost gray pixels from a given scan position.

Parameters:
  • image (np.ndarray) – Input BGR image as numpy array.

  • scan_x (int, optional) – X-coordinate to start scanning from. Defaults to 320.

  • from_y (int, optional) – Starting Y-coordinate for scanning. Defaults to 0.

  • to_y (int, optional) – Ending Y-coordinate for scanning. Defaults to 480.

Returns:

Tuple containing:
  • virtual_line_coords: Virtual line coordinates (x, y) or None if no gate found

  • gray_mask: The processed gray mask for debugging

  • width: Width between left and right borders or None

Return type:

tuple[tuple[float, float] | None, np.ndarray | None, float | None]
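
Example

A usage sketch; the scan range (from_y=200, to_y=480) and the input image are illustrative assumptions.

    import cv2
    from nnspike.utils.control import find_gate_virtual_line

    frame = cv2.imread("gate.jpg")           # assumed 640x480 BGR image
    coords, gray_mask, width = find_gate_virtual_line(frame, scan_x=320, from_y=200, to_y=480)

    if coords is not None:
        gate_x, gate_y = coords
        offset_pixels = gate_x - 320         # steer so this offset approaches zero
        print(f"gate center at ({gate_x:.0f}, {gate_y:.0f}), width {width:.0f}px")
        cv2.imwrite("gate_mask.jpg", gray_mask)   # inspect the gray mask when tuning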

nnspike.utils.control.calculate_attitude_angle(offset_pixels, roi_bottom_y, camera_height=0.2, focal_length_pixels=640)[source]

Calculate attitude angle (theta) from pixel offset using camera geometry.

This function converts the pixel-based offset detected in the camera image to a real-world attitude angle that represents the robot’s deviation from the desired path. This provides more physically meaningful control compared to simple pixel-based normalization.

Parameters:
  • offset_pixels (float) – Lateral offset in pixels from image center

  • roi_bottom_y (int) – Bottom y-coordinate of ROI (closer to robot)

  • camera_height (float, optional) – Camera height above ground in meters. Defaults to 0.20.

  • focal_length_pixels (float, optional) – Camera focal length in pixels. Defaults to 640.

Returns:

Attitude angle (theta) in radians. Positive values indicate rightward deviation, negative values indicate leftward deviation.

Return type:

float

Note

The camera parameters (height and focal length) should be calibrated for your specific robot setup to ensure accurate angle calculations.
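
Example

A small numeric sanity check; the offset and ROI values are illustrative, and the pinhole comparison in the comment is only a rough bound, not the function's exact geometry (which also depends on camera height and ROI position).

    import math
    from nnspike.utils.control import calculate_attitude_angle

    # 60 px to the right of image center, ROI bottom at the last row of an assumed 480-row frame
    theta = calculate_attitude_angle(
        offset_pixels=60.0,
        roi_bottom_y=480,
        camera_height=0.20,
        focal_length_pixels=640,
    )

    # A plain pinhole model without ground projection would give atan(60 / 640) ~ 0.094 rad;
    # the actual value may differ because the function also uses camera height and ROI geometry.
    print(f"theta = {theta:.3f} rad ({math.degrees(theta):.1f} deg)")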