Questions Regarding Surgical Instrument Instance Segmentation

4 Posts
3 Users
1 Reactions
96 Views
(@cekkec)
Posts: 1
Topic starter
 

Hello,

I have a few questions:

  1. Instance distinction for each tool
    In the provided dataset's mask labels, instances of the same tool are mapped with different colors. Should the same tools be distinguished as separate instances during the test phase as well?

  2. Test submission frequency and result feedback
    The Git repository mentions that we can submit only one Docker image. Can we update this image multiple times? Additionally, will we be able to see the results of the model after each update?

Thank you.

 
Posted : 05/09/2024 1:45 pm
(@tobias-ruckert)
Posts: 13
Admin
 

Dear Enki,

Thank you very much for your request.

Regarding your questions:

1. Yes, you are right: in the provided ground-truth data, instances of instruments belonging to the same class are coded with different colors, but they differ only in the B channel. The class of an instrument is defined by the combination of the R and G channels, while the different instances within a class carry different B values. This means that if an image contains two instruments of the same class, both have the same R and G values but different B values, which serve only to differentiate the instances.

In the ground-truth masks of the training and test data, we have chosen the color encoding for the instruments as specified in our "Data Description and Labeling Instructions" document on the "Data" page of our website. Where there are multiple instances of the same instrument, we have consistently added an offset of +20 to the stated B-channel value to tell the instances apart. We would suggest using this approach for your outputs on the test data as well.
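As a minimal sketch of the decoding described above: class identity comes from the (R, G) pair and instances differ in B, with a +20 offset per extra instance. The base color (255, 0, …) below is hypothetical; the real class colors are listed in the "Data Description and Labeling Instructions" document.

```python
import numpy as np

B_OFFSET = 20  # per-instance blue-channel offset, per the answer above

def split_instances(mask):
    """Split an RGB ground-truth mask (H, W, 3) into per-instance binary masks.

    Keys are ((R, G), B): (R, G) identifies the instrument class,
    B distinguishes the instances of that class.
    """
    instances = {}
    for color in np.unique(mask.reshape(-1, 3), axis=0):
        r, g, b = (int(c) for c in color)
        if (r, g, b) == (0, 0, 0):
            continue  # skip background pixels
        instances[((r, g), b)] = np.all(mask == color, axis=-1)
    return instances

# Toy example: two instances of one (hypothetical) class color.
mask = np.zeros((4, 4, 3), dtype=np.uint8)
mask[0, 0] = (255, 0, 10)             # first instance of the class
mask[1, 1] = (255, 0, 10 + B_OFFSET)  # second instance: same R/G, B shifted by +20
inst = split_instances(mask)
```

The same convention can be applied in reverse when writing your own prediction masks: give every instance of a class the class's R and G values and add +20 to B for each further instance.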

2. You are right: exactly one Docker image should be submitted. This image can be updated as often as you like until the deadline on September 15th, 2024. We will check the submitted model on a small part of the training data for correct functionality, i.e., whether the output of your model corresponds to the desired output format, and you will receive feedback shortly afterwards. After the deadline, the model will be evaluated on the test data, and the results will be presented in the EndoVis workshop at the MICCAI conference in Marrakesh on October 10th, 2024.

Best regards,

Tobias Rueckert

 
Posted : 10/09/2024 2:08 pm
(@yumion)
Posts: 3
 

Hello, 

Posted by: @tobias-ruckert

The submitted model will be checked by us on a small part of the training data for correct functionality, i.e., whether the output of your model corresponds to the desired output format. You will then receive feedback shortly afterwards.

 

I have two questions about this:

1. How do we get the feedback? By email, or by notification on [this]( https://phakir-submission.re-mic.de/notifications) website?

2. How long does it take to check the submitted model?

 

Best, 

Atsushi

 
Posted : 16/09/2024 12:39 pm
(@tobias-ruckert)
Posts: 13
Admin
 

Dear Atsushi,

Thank you very much for your request. Regarding your two questions:

1. The feedback will be provided by email.

2. The submitted containers will be checked manually for correct functionality and to ensure that the model's output format corresponds to the expected format specified in our submission template ( https://github.com/schnadoslin/PhaKIR_Submission_Template/ ). During the workday it usually takes a few hours from submission to notification; at night it can take a little longer, depending on the submission time in Central European Time (CET).

Best regards,

Tobias

 
Posted : 16/09/2024 1:30 pm