Introduction
Human-Centric Perceptual Modeling for Point Clouds: JND Prediction for Point Cloud Compression on the PKU-JND Dataset
Point clouds are a fundamental 3D representation widely used in immersive media, autonomous driving, digital twins, virtual/augmented reality, and telepresence. Compression under strict bandwidth, storage, and compute constraints inevitably introduces perceptual quality degradation.
Just Noticeable Difference (JND) characterizes the minimum distortion level that becomes perceptible to human observers. Accurate JND modeling supports perceptually optimized compression by removing imperceptible redundancies while preserving visually critical information.
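As a sketch (using a common psychometric convention rather than the challenge's official definition), the JND of a reference point cloud can be written as the smallest distortion level whose probability of detection reaches a fixed threshold:
$$\mathrm{JND} = \min\left\{\, d \;:\; P_{\mathrm{detect}}(d) \ge p_{\mathrm{th}} \,\right\}, \qquad \text{e.g. } p_{\mathrm{th}} = 0.5,$$
where $P_{\mathrm{detect}}(d)$ is the fraction of observers who can distinguish the distorted point cloud at distortion level $d$ from the reference.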
This challenge introduces PKU-JND, a large-scale benchmark dataset dedicated to JND measurement for point cloud compression, and solicits robust JND prediction methods evaluated under a standardized protocol.
Dataset: PKU-JND
- 230 reference point cloud models across 5 content categories
- Distortions generated using MPEG G-PCC
- 5,750 geometry-distorted samples; 7,130 attribute-distorted samples
- Subjective experiments (two phases, 20–31 participants) with rigorous post-processing
Dataset download links will be released with the official challenge announcement.
Tasks
- Geometry JND prediction
- Attribute (color) JND prediction
- Unified perceptual JND modeling
Evaluation
Metrics are computed between subjective JND scores and predicted scores on the held-out test set.
Let the subjective JND score be $s_i$ and the predicted score be $\hat{s}_i$, for $i=1,\dots,N$.
(1) Pearson Linear Correlation Coefficient (PLCC)
$$\mathrm{PLCC} = \frac{\sum_{i=1}^{N}\left(s_i-\bar{s}\right)\left(\hat{s}_i-\bar{\hat{s}}\right)}{\sqrt{\sum_{i=1}^{N}\left(s_i-\bar{s}\right)^{2}}\,\sqrt{\sum_{i=1}^{N}\left(\hat{s}_i-\bar{\hat{s}}\right)^{2}}}$$
where $\bar{s}$ and $\bar{\hat{s}}$ denote the means of the subjective and predicted scores, respectively.
(2) Spearman Rank-Order Correlation Coefficient (SRCC)
$$\mathrm{SRCC} = 1 - \frac{6\sum_{i=1}^{N} d_i^{2}}{N\left(N^{2}-1\right)}$$
where $d_i$ is the difference between the ranks of $s_i$ and $\hat{s}_i$.
(3) Kendall Rank Correlation Coefficient (KRCC)
$$\mathrm{KRCC} = \frac{C - D}{\frac{1}{2}N\left(N-1\right)}$$
where $C$ and $D$ denote the numbers of concordant and discordant pairs, respectively.
(4) Root Mean Squared Error (RMSE)
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(s_i - \hat{s}_i\right)^{2}}$$
(5) Final Ranking Score
Final rankings are determined by the Final Ranking Score; PLCC, SRCC, KRCC, and RMSE are also reported individually.
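For reference, the four correlation and error metrics above can be computed with standard NumPy/SciPy routines. The sketch below is an illustrative implementation only, not the official evaluation script (which may, for example, apply per-category aggregation or a nonlinear mapping before PLCC):

```python
# Illustrative computation of PLCC, SRCC, KRCC, and RMSE between
# subjective and predicted JND scores; not the official script.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

def evaluate(subjective, predicted):
    """Return PLCC, SRCC, KRCC, and RMSE for two 1-D score arrays."""
    s = np.asarray(subjective, dtype=float)
    s_hat = np.asarray(predicted, dtype=float)
    plcc, _ = pearsonr(s, s_hat)    # linear correlation
    srcc, _ = spearmanr(s, s_hat)   # rank correlation (uses rank differences d_i)
    krcc, _ = kendalltau(s, s_hat)  # concordant vs. discordant pairs
    rmse = float(np.sqrt(np.mean((s - s_hat) ** 2)))
    return {"PLCC": plcc, "SRCC": srcc, "KRCC": krcc, "RMSE": rmse}

if __name__ == "__main__":
    # Toy example with made-up scores.
    print(evaluate([0.2, 0.5, 0.8, 0.4], [0.25, 0.45, 0.70, 0.50]))
```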
Reproducibility & Fairness: Only the official training data may be used. Any external data usage (pretraining, fine-tuning, distillation, or model selection) is strictly prohibited. An official evaluation script will be released.
Submission
Submit predictions for the official test set following the provided JSON format. Each team may submit up to three results per day.
The official competition platform and submission instructions will be announced.
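Since the official JSON schema has not yet been published, the snippet below only illustrates how a team might serialize its predictions; the file name and field names (`team`, `results`, `sample_id`, `predicted_jnd`) are placeholders, not the required format.

```python
# Hypothetical submission writer; field names and layout are assumptions
# until the official JSON format is released.
import json

predictions = {
    "team": "example_team",      # placeholder field
    "results": [                 # placeholder field: one entry per test sample
        {"sample_id": "test_0001", "predicted_jnd": 0.42},
        {"sample_id": "test_0002", "predicted_jnd": 0.31},
    ],
}

with open("submission.json", "w") as f:
    json.dump(predictions, f, indent=2)
```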
Dates
All deadlines are at 11:59 PM UTC unless otherwise noted.
| Event | Date (UTC) |
|---|---|
| Registration Open | 2026-02-15 |
| Training Data Release | 2026-03-10 |
| Challenge Result Submission Deadline | 2026-04-25 |
| Challenge Technical Paper Submission Deadline | 2026-05-10 |
| Final Decisions | 2026-05-15 |
| Camera Ready Submission Deadline | 2026-05-25 |
Organizers
- Liang Xie — Guangdong University of Technology (LXie5201@outlook.com)
- Wei Gao — Peking University, Shenzhen (gaowei262@pku.edu.cn)
- Ge Li — Peking University, Shenzhen (geli@pku.edu.cn)
- Yanting Li — Guangdong University of Technology (1209847234@qq.com)