Rules
All participants interested in this Challenge are encouraged to carefully review and adhere to the rules outlined below.
Registration Required
All participants must register to join the challenge. After registration,
you'll receive essential credentials and updates.
👉 Register here
Track Your Progress
Submit your interim results through our Results Portal to view your
provisional ranking. A password is required (you will receive it after
registration).
👉 Submit results
Submission Format:
Files must be in COCO format (JSON file following the COCO evaluation
schema).
👉 Sample Submission File
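For orientation, a COCO-format detection results file is a JSON array of per-detection records. Below is a minimal sketch of writing such a file in Python; the image IDs, class IDs, and box values are illustrative placeholders, not real challenge data.

```python
# Minimal sketch of writing detections in the COCO results format.
# All IDs and box values below are illustrative placeholders.
import json

detections = [
    {
        "image_id": 1,                        # image ID from the test annotations
        "category_id": 3,                     # official class ID
        "bbox": [100.0, 200.0, 50.0, 80.0],   # [x, y, width, height] in pixels
        "score": 0.91,                        # detection confidence
    },
    # ... one record per detection
]

with open("predictions.json", "w") as f:
    json.dump(detections, f)
```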
🏆 Final Submission Requirements
To ensure consistency and fairness in the evaluation process, the top 10 teams (ranked by their highest-performing submission) must provide a GitHub repository named TeamName_CADOT_Challenge, containing the components specified below.
Your GitHub repository MUST follow this exact structure:

- README.md: Must contain:
  - Hardware/software requirements (e.g., CUDA version, GPU memory).
  - Step-by-step execution commands with example parameters.
  - Validation scores achieved during development.
  - Dependency installation instructions (preferably using a virtual environment).
- requirements.txt: Frozen dependencies with exact versions (e.g., torch==2.1.0).
- data/: Empty directories with an instructions.md explaining the directory structure for generated images (if using augmentation).
- models/:
  - Pre-trained weights (.pth, .pt, .h5).
  - Architecture file with all layer details.
  - Config file with readable hyperparameters.
- scripts/:
  - Training script with full reproducibility.
  - Inference script producing COCO-format JSON.
  - Separate module for data augmentation.
  - Evaluation script calculating all metrics (a minimal sketch using pycocotools follows this list).
- results/:
  - Final predictions.json for the test set.
  - Final metrics.csv with per-class AP/AR metrics.
- reports/:
  - 5-page PDF report including:
    - Model architecture diagram
    - Training convergence plots
    - Error analysis with confusion matrix
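As referenced in the scripts/ item above, here is a minimal evaluation sketch using pycocotools against a COCO-format ground-truth file. The file paths are hypothetical placeholders, and your evaluation script may differ.

```python
# Minimal evaluation sketch using pycocotools; file paths below are
# hypothetical placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/test.json")                 # ground-truth annotations (placeholder path)
coco_dt = coco_gt.loadRes("results/predictions.json")   # your COCO-format predictions

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()   # prints AP/AR, including AP at IoU=0.50
```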
Technical Requirements – Critical Aspects:
- Reproducibility: Re-runs by the organizers must reproduce the reported scores within ±1% mAP@50 variation (see the seeding sketch after this list).
- Annotation Compliance: predictions.json must use the exact COCO format with official class IDs.
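To help meet the ±1% reproducibility tolerance, a common starting point is fixing all random seeds. The sketch below assumes a PyTorch pipeline (not mandated by the challenge); full determinism may also require controlling data-loader worker seeds and avoiding non-deterministic CUDA kernels.

```python
# Seeding sketch, assuming a PyTorch pipeline (an assumption, not a
# challenge requirement). Full determinism may also depend on your
# data loaders and CUDA settings.
import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```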
Forbidden Practices:
- Manual annotation of any generated images.
- Use of external data not documented in the report.
Criteria
To decide the winner of this Challenge, the organizing committee will consider the following items:
- Performance Metrics: Mean Average Precision (mAP) at IoU 0.5, with separate assessments for all object classes.
- Object Detection Performance: Evaluation will be conducted on both frequent and rare classes to ensure robustness across the dataset.
- Reproducibility: All results must be reproducible using provided scripts and models.
- Class-wide Performance: Balanced performance across all classes is essential.