@paula-derrenger

Understanding metrics like mAP, Precision, and F1 Score is key to optimizing YOLOv8. Have any questions or insights about these concepts? Drop them below, and our team is here to help clarify and discuss!
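For anyone who wants the definitions in code, here is a minimal sketch of how Precision, Recall, and F1 follow from true-positive, false-positive, and false-negative counts (the function name and example numbers are illustrative, not from any real run):

```python
# Precision, Recall, and F1 from raw detection counts.
# tp: correct detections, fp: spurious detections, fn: missed objects.
def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Example: 80 correct boxes, 20 false alarms, 10 missed objects.
p, r, f1 = precision_recall_f1(80, 20, 10)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.8 0.889 0.842
```

F1 is the harmonic mean of precision and recall, so it only gets close to 1.0 when both are high.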

@dalinsixtus6752

I need to change the backbone of YOLOv8, given in the cfg file, to EfficientNet. Is changing the cfg file (.yaml) enough, or do I need to change other files (.py)? Can you name the file? I'm not too familiar with the code. Please help me.

@ajarivas72

How can you compare which model is best when testing with the same val images?

best_a.pt or best_b.pt?
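One common approach is to validate both checkpoints on the same dataset split and compare a single headline metric such as mAP50-95. The sketch below assumes the standard Ultralytics API (`YOLO(...).val()` returning a metrics object with `box.map`); the weight and dataset paths are placeholders:

```python
# Assumed Ultralytics usage (paths are placeholders):
# from ultralytics import YOLO
# map_a = YOLO("best_a.pt").val(data="data.yaml").box.map  # mAP50-95
# map_b = YOLO("best_b.pt").val(data="data.yaml").box.map

def pick_best(map_a, map_b):
    """Return which checkpoint scored higher on the shared val split."""
    return "best_a.pt" if map_a >= map_b else "best_b.pt"

# Example with made-up scores:
print(pick_best(0.71, 0.68))  # best_a.pt
```

The key point is that both models must see exactly the same validation images and the same IoU/confidence settings, otherwise the comparison is not apples-to-apples.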

@atolagbejoshua1842

Can you please explain why the mIoU is not displayed in the validation?

@UnknownUkht-np6qy

Can you please tell me why the accuracy isn't displayed?

@Raekpruk18

Awesome, I'm looking forward to the new lessons!

@abdellatifBELMADY

Great job 👏

@sylraht

Hi, I'm working with YOLO and during validation I get high performance metrics, which visually align with what I observe in the predictions. However, the confusion matrix generated automatically doesn't seem to reflect this — it shows values that suggest perfect detection of the target class even when the ground truth is background, and overall it looks inconsistent with the model’s actual behavior. Since everything else appears to be working properly, I suspect the issue may be related to how the confusion matrix is being calculated or normalized internally. Any help or clarification would be greatly appreciated.
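One way the "perfect detection even when ground truth is background" effect can arise is column-wise normalization: if the normalized matrix divides each column by its column sum, a column backed by only a handful of samples can display 1.0 even though the absolute counts are tiny. A small illustration with synthetic counts (not from any real run):

```python
import numpy as np

# Rows = predicted class, columns = true class (synthetic counts).
# The "background" column holds only 2 samples, both predicted "target".
#               true: target  background
cm = np.array([[95,        2],   # predicted target
               [ 5,        0]])  # predicted background

# Column-wise normalization, as typically used for the normalized plot:
col_norm = cm / cm.sum(axis=0, keepdims=True)
print(col_norm)
# The background column reads [1.0, 0.0]: every background sample was
# called "target", yet only 2 samples sit behind that 1.0.
```

So it is worth checking the raw (un-normalized) counts before concluding the matrix contradicts the other metrics.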

@mardeenosman8979

what about performance metrics for segmentation?

@W0lfbaneShikaisc00l

This video doesn't properly explain what the values in the confusion matrix represent. I'm having to refer to research papers because no one can be bothered to explain what the true/false positives and true/false negatives in the matrix are. Maybe you could explain this properly, so that a student can understand what your confusion matrix is producing and how it can be used to calculate precision, recall, F1, and mAP. I fail to see how one of your developers telling someone in a forum "oh, it works differently in YOLOv8" and then referring us to the documentation (without saying which part of the documentation mentions this) is going to help someone who is new to YOLOv8 and new to the whole concept of confusion matrices, while glossing over the important parts and summarising "what they visualise".

Your documentation needs better explanations; I can see from this video alone that it doesn't go into much detail. It would be helpful for someone who is learning AI to know this, as it seems vital to understand what your confusion matrix is telling you. Sadly I'm having to piece things together to make sense of it all, as even your forum developer confuses me by saying "these are results not included in the validation dataset", and when I try to make sense of two seemingly different confusion matrix tables, this compounds the confusion (pardon the pun).

Sadly, when I look at the Ultralytics documentation, it tends to be either too generalized or too wordy to make sense of. I often have to find better sources from academics that explain things in a more sensible manner. This is something that could be improved if someone were to revise it. Not that I don't appreciate the effort that goes into making these, but it's rather a headache to go through pages of documentation to find a small relevant section that may not go into enough detail.
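For what it's worth, here is how per-class precision and recall fall out of a detection-style confusion matrix with an extra background slot, in code. The convention assumed here (rows = predicted class, columns = true class) and the counts are illustrative; some tools transpose the axes, so check your plot's labels:

```python
import numpy as np

# Synthetic detection confusion matrix with a "background" slot.
# classes: 0 = cat, 1 = dog, 2 = background
cm = np.array([[50,  6,  4],   # predicted cat  (4 = false alarm on background)
               [ 2, 40,  6],   # predicted dog
               [ 5,  7,  0]])  # predicted background (= missed objects)

def per_class_pr(cm, c):
    tp = cm[c, c]              # correct detections of class c
    fp = cm[c].sum() - tp      # predicted c, true class differs (incl. background)
    fn = cm[:, c].sum() - tp   # true c, predicted otherwise (incl. missed)
    return tp / (tp + fp), tp / (tp + fn)

p_cat, r_cat = per_class_pr(cm, 0)
print(round(p_cat, 3), round(r_cat, 3))  # 0.833 0.877
```

In this convention a class's diagonal entry is its TP, the rest of its row are FPs (including false alarms on background), and the rest of its column are FNs (including objects the model missed entirely); F1 and mAP then build on these per-class counts.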

@dianafonseca6721

What does the B mean in the results.png?

@trunghaquoc7329

Hi, your video is very helpful. I have a question: how can I get the values of the confusion matrix when using the evaluation function of the YOLOv8 classification model? Looking forward to your answer!
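A hedged sketch: the attribute names in the comments below are my assumption about the Ultralytics API and may differ between versions, so treat them as a starting point. Once you have the raw counts, quantities like top-1 accuracy follow directly:

```python
import numpy as np

# Assumed Ultralytics usage (not verified against every version):
# from ultralytics import YOLO
# metrics = YOLO("yolov8n-cls.pt").val(data="path/to/dataset")
# cm = metrics.confusion_matrix.matrix  # raw counts as an array

# With raw counts in hand, top-1 accuracy is the diagonal fraction.
# Synthetic 2-class counts stand in for a real validation run:
cm = np.array([[48,  2],
               [ 5, 45]])
accuracy = np.trace(cm) / cm.sum()
print(round(accuracy, 2))  # 0.93
```

If the attribute isn't exposed on the metrics object in your version, inspecting the validator object after `val()` (or the saved confusion-matrix plot in the run directory) is the fallback.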

@caiohenriquemanganeli9806

Thank you for this awesome video! Is it possible to plot mAP50-95 vs epochs in Ultralytics in order to assess under/overfitting?
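Training writes a `results.csv` into the run directory, and (assuming the usual column name `metrics/mAP50-95(B)`; some versions pad headers with spaces, so strip them) you can plot that column against epoch yourself. A sketch with a tiny synthetic CSV standing in for a real run:

```python
import csv
import io

import matplotlib
matplotlib.use("Agg")  # headless backend so no display is needed
import matplotlib.pyplot as plt

# Stand-in for runs/detect/train/results.csv (real files have more columns
# and may pad header names with spaces, hence the k.strip() below).
fake_csv = io.StringIO(
    "epoch,metrics/mAP50-95(B)\n"
    "1,0.20\n2,0.35\n3,0.42\n4,0.41\n"
)
rows = [{k.strip(): v for k, v in r.items()} for r in csv.DictReader(fake_csv)]
epochs = [int(r["epoch"]) for r in rows]
maps = [float(r["metrics/mAP50-95(B)"]) for r in rows]

plt.plot(epochs, maps, marker="o")
plt.xlabel("epoch")
plt.ylabel("mAP50-95")
plt.savefig("map_vs_epoch.png")

print(max(maps))  # 0.42
```

A validation-mAP curve that flattens or drops while the training loss keeps falling is the usual sign of overfitting; a curve still climbing at the last epoch suggests underfitting (train longer).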

@nullvoid7543

I expected it to be more in-depth.