libreyolo.LIBREYOLO11¶
- class libreyolo.LIBREYOLO11[source]¶
Bases: object
Libre YOLO11 model for object detection.
- Parameters:
model_path – Path to model weights file (required)
size – Model size variant (required). Must be one of: “n”, “s”, “m”, “l”, “x”
reg_max – Regression max value for DFL (default: 16)
nb_classes – Number of classes (default: 80 for COCO)
save_feature_maps – Feature map saving mode. Options:
- False: Disabled (default)
- True: Save all layers
- List of layer names: Save only specified layers (e.g., [“backbone_p1”, “neck_c2f21”])
save_eigen_cam – If True, saves EigenCAM heatmap visualizations on each inference (default: False)
cam_method – CAM method for explain(). Options: “eigencam”, “gradcam”, “gradcam++”, “xgradcam”, “hirescam”, “layercam”, “eigengradcam” (default: “eigencam”)
cam_layer – Target layer for CAM computation (default: “neck_c2f22”)
device – Device for inference. “auto” (default) uses CUDA if available, else MPS, else CPU. Can also specify directly: “cuda”, “cuda:0”, “mps”, “cpu”.
tiling – Enable tiling for processing large/high-resolution images (default: False). When enabled, large images are automatically split into overlapping 640x640 tiles, inference is run on each tile, and results are merged using NMS.
Example
>>> model = LIBREYOLO11(model_path="path/to/weights.pt", size="x", save_feature_maps=True)
>>> detections = model(image=image_path, save=True)
>>> # Use explain() for XAI heatmaps
>>> heatmap = model.explain("image.jpg", method="gradcam")
Methods
__init__(model_path, size[, reg_max, ...]) – Initialize the Libre YOLO11 model.
explain(image[, method, target_layer, ...]) – Generate explainability heatmap for the given image using CAM methods.
export([output_path, input_size, opset]) – Export the model to ONNX format.
Get list of available CAM methods.
get_available_layer_names() – Get list of available layer names for feature map saving.
predict(image[, save, output_path, ...]) – Alias for __call__ method.
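For example, the export() entry above can be used to write an ONNX file, as in the minimal sketch below (the output file name is a placeholder and the remaining arguments are left at their defaults):
>>> model.export(output_path="yolo11n.onnx")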
- __init__(model_path, size, reg_max=16, nb_classes=80, save_feature_maps=False, save_eigen_cam=False, cam_method='eigencam', cam_layer=None, device='auto', tiling=False)[source]¶
Initialize the Libre YOLO11 model.
- Parameters:
model_path (str | dict) – Path to user-provided model weights file or loaded state dict
size (str) – Model size variant. Must be “n”, “s”, “m”, “l”, or “x”
reg_max (int) – Regression max value for DFL (default: 16)
nb_classes (int) – Number of classes (default: 80)
save_feature_maps (bool | List[str]) – Feature map saving mode. Options:
- False: Disabled
- True: Save all layers
- List[str]: Save only specified layer names
save_eigen_cam (bool) – If True, saves EigenCAM heatmap visualizations
cam_method (str) – Default CAM method for explain() (default: “eigencam”)
cam_layer (str | None) – Target layer for CAM computation (default: “neck_c2f22”)
device (str) – Device for inference (“auto”, “cuda”, “mps”, “cpu”)
tiling (bool) – Enable tiling for large images (default: False). When enabled, images larger than 640x640 are split into overlapping tiles for inference.
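A minimal construction sketch based on these parameters (the weights file name is a placeholder; the device and tiling values are only illustrative):
>>> model = LIBREYOLO11(model_path="yolo11n.pt", size="n", device="auto")
>>> tiled_model = LIBREYOLO11(model_path="yolo11n.pt", size="m", tiling=True)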
- __call__(image, save=False, output_path=None, conf_thres=0.25, iou_thres=0.45, color_format='auto', batch_size=1)[source]¶
Run inference on an image or directory of images.
- Parameters:
image (str | Path | Image | ndarray | Tensor | bytes | BytesIO) – Input image or directory. Supported types:
- str: Local file path, directory path, or URL (http/https/s3/gs)
- pathlib.Path: Local file path or directory path
- PIL.Image: PIL Image object
- np.ndarray: NumPy array (HWC or CHW, RGB or BGR)
- torch.Tensor: PyTorch tensor (CHW or NCHW)
- bytes: Raw image bytes
- io.BytesIO: BytesIO object containing image data
save (bool) – If True, saves the image with detections drawn. Defaults to False.
output_path (str) – Optional path to save the annotated image. If not provided, saves to ‘runs/detections/’ with a timestamped name.
conf_thres (float) – Confidence threshold (default: 0.25)
iou_thres (float) – IoU threshold for NMS (default: 0.45)
color_format (str) – Color format hint for NumPy/OpenCV arrays.
- “auto”: Auto-detect (default)
- “rgb”: Input is RGB format
- “bgr”: Input is BGR format (e.g., OpenCV)
batch_size (int) – Number of images to process per batch when handling multiple images (e.g., directories). Currently used for chunking at the Python level; true batched model inference is planned for future versions. Default: 1 (process one image at a time).
- Returns:
- Dictionary containing detection results with keys:
boxes: List of bounding boxes in xyxy format
scores: List of confidence scores
classes: List of class IDs
num_detections: Number of detections
source: Source image path (if available)
saved_path: Path to saved image (if save=True)
For directory: List of dictionaries, one per image processed.
- Return type:
dict for a single image; list of dict for a directory of images
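A usage sketch based on the parameters and return keys documented above (the file and directory names are placeholders):
>>> result = model("image.jpg", save=True, conf_thres=0.5)
>>> print(result["num_detections"], result["boxes"], result["scores"])
>>> results = model("images/", batch_size=4)  # a directory yields one dict per image
>>> for r in results:
...     print(r["source"], r["num_detections"])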
- predict(image, save=False, output_path=None, conf_thres=0.25, iou_thres=0.45, color_format='auto', batch_size=1)[source]¶
Alias for __call__ method.
- Parameters:
image (str | Path | Image | ndarray | Tensor | bytes | BytesIO) – Input image or directory. Supported types:
- str: Local file path, directory path, or URL (http/https/s3/gs)
- pathlib.Path: Local file path or directory path
- PIL.Image: PIL Image object
- np.ndarray: NumPy array (HWC or CHW, RGB or BGR)
- torch.Tensor: PyTorch tensor (CHW or NCHW)
- bytes: Raw image bytes
- io.BytesIO: BytesIO object containing image data
save (bool) – If True, saves the image with detections drawn. Defaults to False.
output_path (str) – Optional path to save the annotated image.
conf_thres (float) – Confidence threshold (default: 0.25)
iou_thres (float) – IoU threshold for NMS (default: 0.45)
color_format (str) – Color format hint for NumPy/OpenCV arrays (“auto”, “rgb”, “bgr”)
batch_size (int) – Number of images to process per batch when handling multiple images (e.g., directories). Default: 1.
- Returns:
Dictionary containing detection results. For directory: List of dictionaries, one per image processed.
- Return type:
dict for a single image; list of dict for a directory of images
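A sketch of predict() with an OpenCV array, where the “bgr” hint skips auto-detection (the file name is a placeholder and cv2 is assumed to be installed):
>>> import cv2
>>> frame = cv2.imread("image.jpg")  # OpenCV loads images as BGR HWC arrays
>>> result = model.predict(frame, color_format="bgr")
>>> print(result["classes"])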
- explain(image, method=None, target_layer=None, eigen_smooth=False, save=False, output_path=None, alpha=0.5, color_format='auto')[source]¶
Generate explainability heatmap for the given image using CAM methods.
This method provides visual explanations of what the model focuses on when making predictions. It supports multiple CAM (Class Activation Mapping) techniques including gradient-based and gradient-free methods.
- Parameters:
image (str | Path | Image | ndarray | Tensor | bytes | BytesIO) – Input image. Supported types:
- str: Local file path or URL (http/https/s3/gs)
- pathlib.Path: Local file path
- PIL.Image: PIL Image object
- np.ndarray: NumPy array (HWC or CHW, RGB or BGR)
- torch.Tensor: PyTorch tensor (CHW or NCHW)
- bytes: Raw image bytes
- io.BytesIO: BytesIO object containing image data
method (str | None) – CAM method to use. Options:
- “eigencam”: Gradient-free, SVD-based (default)
- “gradcam”: Gradient-weighted class activation
- “gradcam++”: Improved GradCAM with second-order gradients
- “xgradcam”: Axiom-based GradCAM
- “hirescam”: High-resolution CAM
- “layercam”: Layer-wise CAM
- “eigengradcam”: Eigen-based gradient CAM
target_layer (str | None) – Layer name for CAM computation. Use get_available_layer_names() to see options. Defaults to “neck_c2f22”.
eigen_smooth (bool) – Apply SVD smoothing to the heatmap (default: False).
save (bool) – If True, saves the heatmap visualization to disk.
output_path (str | None) – Optional path to save the visualization.
alpha (float) – Blending factor for overlay (default: 0.5).
color_format (str) – Color format hint for NumPy/OpenCV arrays (“auto”, “rgb”, “bgr”).
- Returns:
Dictionary containing:
heatmap: Grayscale heatmap array of shape (H, W) with values in [0, 1]
overlay: RGB overlay image as numpy array
original_image: Original image as PIL Image
method: CAM method used
target_layer: Target layer used
saved_path: Path to saved visualization (if save=True)
- Return type:
dict
Example
>>> model = LIBREYOLO11("yolo11n.pt", size="n")
>>> result = model.explain("image.jpg", method="gradcam", save=True)
>>> heatmap = result["heatmap"]
>>> overlay = result["overlay"]
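Since the returned heatmap is an (H, W) array with values in [0, 1], it can be inspected or plotted directly; the matplotlib usage below is only illustrative:
>>> import matplotlib.pyplot as plt
>>> plt.imshow(result["heatmap"], cmap="jet")
>>> plt.axis("off")
>>> plt.show()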