## DL Sparse View CT challenge


#### First phase

Training: March 17, 2021, midnight UTC

#### End

Competition ends: June 1, 2021, midnight UTC

Thanks to all participants!
The competition is complete; see the results for the final ranking.

#### Overview

The American Association of Physicists in Medicine (AAPM) is sponsoring a “Grand Challenge” on deep-learning for image reconstruction leading up to the 2021 AAPM Annual Meeting. The DL-sparse-view CT challenge will provide an opportunity for investigators in CT image reconstruction using data-driven techniques to compete with their colleagues on the accuracy of their methodology for solving the inverse problem associated with sparse-view CT acquisition. A session at the 2021 AAPM Annual Meeting will focus on the DL-sparse-view CT Challenge; an individual from each of the two top-performing teams will receive a waiver of the meeting registration fee in order to present their methods during this session. Following the Annual Meeting, challenge participants from the five top-performing teams will be invited to participate in a challenge report.

#### Background

Background information on the DL-sparse-view CT challenge can be found in the article “Do CNNs solve the CT inverse problem?” [1], which spells out the evidence necessary to support the claim that data-driven techniques such as deep learning with CNNs solve the CT inverse problem. Recent literature [2,3,4] claims that CNNs can solve inverse problems related to medical image reconstruction. In particular, references [2,4] claim that CNNs solve a specific inverse problem that arises in sparse-view X-ray CT. These papers and other related work have gained widespread attention, and hundreds of follow-up papers build on this approach. Evidence for solving the CT inverse problem can take the form of numerical simulations in which a simulated test image is recovered from its ideal projection (i.e., no noise or other data inconsistencies). In Ref. [1], such experiments were attempted using our best guess at implementing the methodology in Refs. [2,4]. While the CNN results achieved a certain level of accuracy, they fall short of providing evidence for solving the associated inverse problem.

We do, however, acknowledge that there has been much development in this field over the past few years and it stands to reason that there are likely data-driven approaches superior to the one that we implemented. The purpose of this challenge is to identify the state-of-the-art in solving the CT inverse problem with data-driven techniques. The challenge seeks the data-driven methodology that provides the most accurate reconstruction of sparse-view CT data.

#### Objective

The overall objective of the DL-sparse-view CT challenge is to determine which deep-learning (or data-driven) technique provides the most accurate recovery of a test phantom from ideal 128-view projection data with no noise. To this end, we will provide 4000 data/image pairs based on a 2D breast CT simulation that are to be used for training the algorithm. How these 4000 pairs are split into training and validation sets is up to the individual participating teams. After the training period is over, testing data will be provided for 100 cases without the corresponding ground truth images. Participants will submit their reconstructed images for these testing cases.
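The train/validation split can be as simple as a random index partition. A minimal sketch follows; the 90/10 ratio, fixed seed, and array names are illustrative assumptions, not part of the challenge rules:

```python
import numpy as np

# Illustrative split of the 4000 training pairs into training and
# validation subsets; the 90/10 ratio is an arbitrary choice.
rng = np.random.default_rng(seed=0)   # fixed seed for reproducibility
indices = rng.permutation(4000)
train_idx = indices[:3600]            # 3600 pairs for training
val_idx = indices[3600:]              # 400 pairs held out for validation
```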

#### Sources

• [1] E. Y. Sidky, I. Lorente, J. G. Brankov, and X. Pan, “Do CNNs solve the CT inverse problem?”, IEEE Trans. Biomed. Engineering (early access: https://doi.org/10.1109/TBME.2020.3020741), 2020. Also available at: https://arxiv.org/abs/2005.10755
• [2] K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Trans. Image Proc., vol. 26, pp. 4509–4522, 2017.
• [3]  B. Zhu, J. Z. Liu, S. F. Cauley, B. R. Rosen, and M. S. Rosen, “Image reconstruction by domain-transform manifold learning,” Nature, vol. 555, pp. 487–492, 2018.
• [4] Y. Han and J. C. Ye, “Framing U-Net via deep convolutional framelets: Application to sparse-view CT,” IEEE Trans. Med. Imag., vol. 37, pp. 1418–1429, 2018.

#### Results, prizes, and publication plan


At the conclusion of the challenge, the following information will be provided to each participant:

• The evaluation results for the submitted cases
• The overall ranking among the participants

The two top-performing teams:

• Will present their algorithm and results at the annual AAPM meeting
• Will receive complimentary registration to the AAPM meeting
• Will receive a Certificate of Merit from AAPM

A manuscript summarizing the challenge results will be submitted for publication following completion of the challenge.

#### Important Dates

1. Grand challenge website launch: February 28, 2021
2. Release date of training set cases with truth: March 17, 2021
3. Release date of validation set cases: March 31, 2021
4. Release date of test set cases: May 17, 2021
5. Final submission of test results (midnight UTC; 5 p.m. PDT on May 31): June 1, 2021
6. Testing phase leaderboard results posted and top two teams invited to present at the challenge symposium: June 14, 2021
7. Grand Challenge Symposium, AAPM 2021 Annual Meeting: July 25-29, 2021
8. Top five teams invited to participate in the challenge report and datasets made public: August 2021

#### Organizers and Major Contributors

1. Emil Sidky, The University of Chicago
2. Xiaochuan Pan, The University of Chicago
3. Jovan Brankov, Illinois Institute of Technology
4. Iris Lorente, Illinois Institute of Technology
5. Samuel G. Armato, The University of Chicago
6. Karen Drukker, The University of Chicago
7. Lubomir Hadjiyski, University of Michigan
8. Nicholas Petrick, US-FDA
9. Keyvan Farahani, NIH-National Cancer Institute
10. Reshma Munbodh, Brown University
11. Kenny Cha, US-FDA
12. Jayashree Kalpathy-Cramer, Massachusetts General Hospital
13. Benjamin Bearce, Massachusetts General Hospital
14. AAPM Working Group on Grand Challenges

#### Contact

For general challenge inquiries, please post to the FORUM.

#### Prepare submission

1. Create a zip file
   - Training phase: `zip trainingSubmission.zip predictions.npy`
   - Validation phase: `zip validationSubmission.zip predictions.npy`
   - Testing phase: `zip testingSubmission.zip predictions.npy algorithmReport.pdf`

In the training and validation phases, predictions.npy is 10x512x512 in float32, containing ten images. In the testing phase predictions.npy is 100x512x512. For the testing phase algorithm report, the acceptable formats are pdf, docx, and doc. Different filenames than those in the examples are acceptable.

2. Submit the zip file
   - Under the "Participate" tab, select "Submit/View Results".
   - Click the "Submit" button and upload the prepared zip file.
   - Processing should take less than a minute; in the training and validation phases, the scoring log can be viewed.
   - After receiving a score, it can be submitted to the leaderboard.

In the training phase the submitted images are compared against the first ten training phantom images. A file containing these images is available under the "Participate" tab by selecting "Files" and clicking on the "Starting Kit" button. If you submit these images you should get a perfect RMSE score of 0.0. The purpose of submitting in the training phase is to practice submission and to understand the scoring system. There are unlimited submissions in the training and validation phases for all individuals even if they are on the same team.

In the testing phase, each team is allowed three submissions (failed submissions do not count toward this limit).
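The expected file format and zip layout can be produced with NumPy and the standard library. A sketch for the training phase follows; the all-zeros array is a placeholder standing in for actual reconstructions:

```python
import numpy as np
import zipfile

# Placeholder for 10 reconstructed images; replace with your network output.
predictions = np.zeros((10, 512, 512), dtype=np.float32)
np.save("predictions.npy", predictions)

# Package the file, equivalent to the "zip" command examples above.
with zipfile.ZipFile("trainingSubmission.zip", "w") as zf:
    zf.write("predictions.npy")
```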

#### Quantitative evaluation

Submitted reconstructed images will be evaluated by computing

• Root-mean-square error (RMSE) averaged over the 100 test images:

$$\frac{1}{100} \sum_{i=1}^{100} \sqrt{\frac{\|t_i-r_i\|_2^2}{n}}$$ where $$r_i$$ and $$t_i$$ represent the reconstructed and test images, respectively, and n is the number of pixels in a single image.
• Worst-case 25x25 pixel ROI-RMSE over all 100 test images:

$$\max_{i,c} \sqrt{\frac{\|b_c (t_i-r_i)\|_2^2}{m}}$$ where $$b_c$$ is a masking indicator function for the 25x25 ROI centered on coordinate c in the image, and m=625 is the number of pixels in the ROI.

Mean RMSE will be the primary metric that determines the algorithm ranking. In case of a numerical tie (to be determined based on the distribution of results with the mean RMSE), the worst-case ROI-RMSE will be used to rank the algorithms.

These metrics are computed by the following script:

metrics.py

```python
# This program demonstrates the mean RMSE and worst-case ROI RMSE metrics.
# The phantom images are taken as the ground truth.

import os
import sys

import numpy as np

INPUT = sys.argv[1]   # input directory containing ./ref and ./res
OUT = sys.argv[2]     # output directory for scores.txt

REFERENCE = os.path.join(INPUT, "ref")          # phantom ground truth
PREDICTION_OUTPUT = os.path.join(INPUT, "res")  # user submission

# Ground truth: the phantom images that participants try to recreate.
# (The input data are the corresponding filtered back-projection
# reconstructions from the 128-view sinograms.)
if len(os.listdir(REFERENCE)) == 1 and os.listdir(REFERENCE)[0].endswith(".npy"):
    phantom_gt_file_name = os.listdir(REFERENCE)[0]
else:
    raise Exception("Organizer: the ref directory must contain exactly one .npy file")
phantom_gt = np.load(os.path.join(REFERENCE, phantom_gt_file_name))

# User images: the goal is to train a network that accepts the FBP128 image
# (and/or the 128-view sinogram) and yields an image that is as close as
# possible to the corresponding phantom image.
if len(os.listdir(PREDICTION_OUTPUT)) == 1 and os.listdir(PREDICTION_OUTPUT)[0].endswith(".npy"):
    prediction_file_name = os.listdir(PREDICTION_OUTPUT)[0]
else:
    raise Exception("Your submission must contain exactly one .npy file")
prediction_phantoms = np.load(os.path.join(PREDICTION_OUTPUT, prediction_file_name))

# get the number of predicted phantoms and the number of pixels in x and y
nim, nx, ny = prediction_phantoms.shape

# mean RMSE computation
diffsquared = (phantom_gt - prediction_phantoms)**2
num_pix = float(nx*ny)

meanrmse = np.sqrt(((diffsquared/num_pix).sum(axis=2)).sum(axis=1)).mean()
print("The mean RMSE over %3i images is %8.6f" % (nim, meanrmse))

# worst-case ROI RMSE computation
roisize = 25  # width and height of the test ROI in pixels
x0 = 0        # tracks the x-coordinate of the worst-case ROI
y0 = 0        # tracks the y-coordinate of the worst-case ROI
im0 = 0       # tracks the image index of the worst-case ROI

maxerr = -1.
for i in range(nim):  # for each image
    print("Searching image %3i" % (i))
    phantom = phantom_gt[i].copy()              # ground truth
    prediction = prediction_phantoms[i].copy()  # prediction
    # slide the ROI over every position in the image
    for ix in range(nx - roisize):
        for iy in range(ny - roisize):
            roiGT = phantom[ix:ix+roisize, iy:iy+roisize].copy()
            roiPred = prediction[ix:ix+roisize, iy:iy+roisize].copy()
            if roiGT.max() > 0.01:  # skip ROIs where the truth image is zero
                roirmse = np.sqrt((((roiGT - roiPred)**2)/float(roisize**2)).sum())
                if roirmse > maxerr:
                    maxerr = roirmse
                    x0 = ix
                    y0 = iy
                    im0 = i

print("Worst-case ROI RMSE is %8.6f" % (maxerr))
print("Worst-case ROI location is (%3i,%3i) in image number %3i" % (x0, y0, im0+1))

with open(os.path.join(OUT, "scores.txt"), "w") as results:
    results.write("score_1: {}\n".format(meanrmse))
    results.write("score_2: {}".format(maxerr))
```
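The exhaustive nested-loop ROI search is slow for 100 images of 512x512 pixels. The sketch below is not part of the official scoring, but an equivalent vectorized version (assuming NumPy >= 1.20 for `sliding_window_view`) can be useful for checking reconstructions during training; note that the loop above stops one anchor position short of the image border, so results can differ marginally there:

```python
import numpy as np

def worst_roi_rmse(gt, pred, roisize=25, threshold=0.01):
    """Vectorized worst-case ROI RMSE over corner-anchored windows,
    mirroring the loop-based search in metrics.py."""
    worst = -1.0
    for g, p in zip(gt, pred):
        # all roisize x roisize windows of the truth and squared-error images
        win_gt = np.lib.stride_tricks.sliding_window_view(g, (roisize, roisize))
        win_sq = np.lib.stride_tricks.sliding_window_view((g - p) ** 2,
                                                          (roisize, roisize))
        # skip ROIs where the truth image is (near) zero
        valid = win_gt.max(axis=(-2, -1)) > threshold
        if valid.any():
            rmse = np.sqrt(win_sq.mean(axis=(-2, -1))[valid]).max()
            worst = max(worst, float(rmse))
    return worst
```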




### Terms and Conditions

By participating in this challenge, each participant agrees to the following terms.

The DL-sparse-view CT challenge is organized in the spirit of cooperative scientific progress. The following rules apply to those who register a team and download the data:

• Anonymous participation is not allowed.
• Entry by commercial entities is permitted, but should be disclosed.
• Entry by multiple participants from the same group is permitted, but all participants from the same group need to register under the same team name. To do this, go to your profile (upper right corner), select Settings, and enter your team name under "Competition settings".
• Once participants submit their outputs to the DL-sparse-view CT challenge organizers, they will be considered fully vested in the challenge, so that their performance results will become part of any presentations, publications, or subsequent analyses derived from the Challenge at the discretion of the organizers.
• The downloaded datasets, or any data derived from them, may not be given or redistributed under any circumstances to persons not belonging to the registered team.
• The full data, including reference data associated with the test set cases, is expected to be made publicly available after the publication of the DL-sparse-view CT Challenge. Until the official public release of the data, data downloaded from this site may only be used for the purpose of preparing an entry to be submitted for the DL-sparse-view CT challenge. The data may not be used for other purposes in scientific studies and may not be used to train or develop other algorithms, including but not limited to algorithms used in commercial products.

### Training

Start: March 17, 2021, midnight

Description: Submit a zipped predictions.npy file containing 10 images to test the submission and scoring system

### Validation

Start: March 31, 2021, midnight

Description: Submit a zipped predictions.npy file containing 10 images to obtain a score for predicting the 10 unknown validation images

### Test

Start: May 17, 2021, midnight

Description: Submit a zipped predictions.npy file and an algorithm summary (pdf, docx, or doc). predictions.npy contains 100 images, scored against the 100 unknown test images

### Competition Ends

June 1, 2021, midnight
