DL-spectral CT

Overview

The American Association of Physicists in Medicine (AAPM) is sponsoring a “Grand Challenge” on deep learning for image reconstruction leading up to the 2022 AAPM Annual Meeting. The DL-spectral CT challenge follows up on last year’s DL-sparse-view CT challenge. DL-spectral CT will provide an opportunity for investigators in CT image reconstruction, using data-driven or iterative techniques, to compete with their colleagues on the accuracy of their methods for solving the inverse problem associated with spectral CT acquisition. A session at the 2022 AAPM Annual Meeting will focus on the DL-spectral CT Challenge; an individual from each of the two top-performing teams will receive a waiver of the meeting registration fee in order to present their methods during this session. Following the Annual Meeting, challenge participants from the five top-performing teams will be highlighted in a challenge report.

Background

Last year’s DL-sparse-view CT challenge [1] tested the ability of deep-learning-based image reconstruction to solve a CT-related inverse problem for which a solution is known to exist, by exploiting gradient sparsity in the true images. For the 2022 DL-spectral CT challenge, we address an inverse problem that is of current interest and is challenging for any image reconstruction technique. Background information on the DL-spectral CT challenge with the details of the spectral CT physics modeling will be included on the challenge website.

 

Briefly, this challenge focuses on a form of spectral CT known as dual-energy CT, where the subject is scanned by X-ray beams with two different spectra. One way to accomplish such a scan is to vary the kilovoltage peak (kVp) as the X-ray source circles the scanned subject; in particular, this challenge models an ideal form of fast kVp-switching where the X-ray tube voltage alternates between two settings for consecutive projections. For each kVp setting the beam spectrum is broad, and this polychromatic nature of the X-ray beam makes quantitative imaging in CT quite challenging when only one kVp setting is used. With dual-energy CT, i.e., two kVp settings, it is possible to reconstruct quantitative images. If the subject is known to be composed of only two tissues, the density map of each tissue can in fact be reconstructed accurately from dual-energy CT data. This challenge focuses on reconstructing tissue maps from dual-energy CT data when there are three tissues present in the scanned subject.
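
To make the acquisition model concrete, the sketch below simulates polychromatic transmission data for a single ray under the Beer-Lambert law. It is a minimal illustration only: the energy grid, spectra, attenuation curves, and path lengths are made-up placeholders, not the actual challenge physics, which is documented on the challenge website.

    import numpy as np

    # Placeholder energy grid and (toy) normalized spectra for the two kVp settings
    energies = np.linspace(20.0, 120.0, 101)              # keV
    s_low = np.exp(-0.5*((energies - 45.0)/12.0)**2)      # low-kVp spectrum (toy)
    s_high = np.exp(-0.5*((energies - 70.0)/18.0)**2)     # high-kVp spectrum (toy)
    s_low, s_high = s_low/s_low.sum(), s_high/s_high.sum()

    # Toy attenuation curves (1/cm) for adipose, fibroglandular, and calcification
    mu = np.stack([c/energies for c in (5.0, 7.0, 60.0)]) # shape (3, n_energy)

    def transmission(path_lengths, spectrum):
        # path_lengths: (3,) intersection length (cm) of the ray with each tissue map
        atten = np.exp(-(path_lengths[:, None]*mu).sum(axis=0))  # Beer-Lambert at each energy
        return -np.log((spectrum*atten).sum())                   # log of spectrally weighted intensity

    lengths = np.array([8.0, 2.0, 0.05])                  # cm of each tissue along one ray
    print(transmission(lengths, s_low), transmission(lengths, s_high))

Because each measurement is the log of a spectrally weighted sum, it is not a linear function of the tissue path lengths; this nonlinearity is what makes quantitative imaging with a single kVp setting difficult and what the two spectra help disambiguate.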

 

As with DL-sparse-view CT, the DL-spectral CT set-up models the breast CT application. Dual-energy CT scans are simulated for a breast model [2] that contains three tissue types: adipose, fibroglandular, and calcification. The calcifications are assumed to be composed of hydroxyapatite [3]. Truth images and simulated data will be provided for 1000 cases so that participants can choose either a data-driven or an optimization-based technique.

The DL-spectral-CT challenge seeks the image reconstruction algorithm that provides the most accurate reconstruction of the adipose, fibroglandular, and calcification spatial maps from dual-energy CT data.

[1] E. Y. Sidky and X. Pan, “Report on the AAPM deep-learning sparse-view CT grand challenge,” Medical Physics (early access: https://doi.org/10.1002/mp.15489), 2022.

[2] I. Reiser and R. M. Nishikawa, “Task-based assessment of breast tomosynthesis: Effect of acquisition parameters and quantum noise,” Medical Physics, Vol. 37, pp. 1591-1600, 2010. (http://doi.org/10.1118/1.3357288)

[3] B. Ghammraoui, Ah. Zidan, A. Alayoubi, As. Zidan, and S. J. Glick, “Fabrication of microcalcifications for insertion into phantoms used to evaluate x-ray breast imaging systems,” Biomedical Physics and Engineering Express, Vol. 7, article number 055021, 2021. (https://doi.org/10.1088/2057-1976/ac1c64)
 

Objective

The overall objective of the DL-spectral CT challenge is to determine which deep-learning or optimization-based technique provides the most accurate recovery of a three-tissue test phantom from ideal, noiseless 512-view dual-energy transmission data. To this end, we will provide 1000 data/tissue-map pairs, based on a 2D breast CT simulation, that can be used for training the algorithms. During the validation phase, 10 cases without the true tissue maps will be provided so that participants can try out their algorithms an unlimited number of times and post results on a leaderboard. In the final two weeks of the challenge (the test phase), testing data will be provided for 100 cases without the corresponding ground-truth images. Participants will submit their reconstructed tissue maps for these testing cases.

 

Get Started

  1. Register to get access via Participate Tab

  2. Download the data/image pairs after approval

  3. Train or develop your image reconstruction algorithm

  4. Download the testing data

  5. Submit your results

 

Important Dates

  • Mar 17, 2022: Training set release

  • Mar 31, 2022: Release of 10-case validation set for the challenge leaderboard

  • May 17, 2022: 100-case Testing set release

  • May 31, 2022: Final submission of test results (midnight UTC / 5 p.m. PDT)

  • June 7, 2022: Top two teams invited to present at challenge symposium 

  • July 10-14, 2022:  Grand Challenge Symposium, AAPM 2022 Annual Meeting

  • August 2022:  Challenge report will be drafted and datasets made public

 

Results, prizes and publication plan

At the conclusion of the challenge, the following information will be provided to each participant:

  • The evaluation results for the submitted cases

  • The overall ranking among the participants

The top two participants:

  • Will present their algorithm and results at the annual AAPM meeting

  • Will receive complimentary registration to the AAPM meeting

  • Will receive a Certificate of Merit from AAPM

Following completion of the challenge, a manuscript will be submitted for publication that summarizes the approaches taken by participating teams and highlights the top five performing algorithms. Note that participants will not be coauthors on the challenge report; it will describe results and methods only briefly, so that participants can publish their own manuscripts based on the DL-spectral CT challenge.

 

Organizers and Major Contributors

  • Emil Sidky (Lead organizer) (The University of Chicago)

  • Xiaochuan Pan (The University of Chicago) 

  • Samuel Armato and the AAPM Working Group on Grand Challenges

Contact

For further information, please contact the lead organizer, Emil Sidky (sidky@uchicago.edu).

Prepare submission:

  1. Create zip file:
    • algorithm_summary.[any one of txt, doc, docx, pdf]
    • Prediction_Adipose.npy
    • Prediction_Calcification.npy
    • Prediction_Fibroglandular.npy

    In the training and validation phases, the ground truths your submissions are compared against are 10x512x512 float32 arrays containing ten images (the first 10 training cases in the training phase, all 10 validation cases in the validation phase). In the testing phase, each prediction array is 100x512x512. In all phases the algorithm summary may be a txt, doc, docx, or pdf file. Filenames different from those in the examples are not acceptable; start a forum discussion if you believe you need to use another. A sketch for assembling the zip file appears after this list.


    In the validation phase, it is fine to submit a placeholder algorithm_summary.txt document that contains only a few words and no algorithm description.


  2. Submit the zip file
    Under the "Participate" tab select "Submit/View Results".
    Click on the "Submit" button and upload the prepared zip file.
    It should take less than a minute to process the submission, and in the training and validation phases the scoring log can be viewed.
    After receiving a score, the score can be submitted to the leaderboard.
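
As mentioned above, here is a minimal sketch for assembling the submission zip, assuming the three prediction arrays are already computed. The pack_submission helper and the placeholder arrays in the usage comment are illustrative, not part of the challenge kit.

    import numpy as np
    import zipfile

    def pack_submission(preds, summary="algorithm_summary.txt", out="submission.zip"):
        # preds: dict mapping tissue name -> array, (100, 512, 512) float32 in the
        # test phase, (10, 512, 512) in the training and validation phases
        names = []
        for tissue in ("Adipose", "Fibroglandular", "Calcification"):
            fname = "Prediction_{}.npy".format(tissue)
            np.save(fname, preds[tissue].astype(np.float32))
            names.append(fname)
        with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
            for name in names + [summary]:
                zf.write(name)   # files sit at the archive root, not in a subfolder

    # usage with placeholder arrays:
    # pack_submission({t: np.zeros((10, 512, 512), dtype=np.float32)
    #                  for t in ("Adipose", "Fibroglandular", "Calcification")})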

Under the "Participate" tab, select "Files" and click on the "Starting Kit" button. In there is a zip file called sample_training_submission_txt.zip. If you submit these images you should get an RMSE score of 0.0199964586645 and worst-case ROI RMSE of 0.0214029662311. The purpose of submitting in the training phase is to practice submission and to understand the scoring system. There are unlimited submissions in the training and validation phases for all individuals even if they are on the same team.

In the testing phase, each team will be allowed three submissions. (Failed submissions do not count toward this limit.)

 

Quantitative evaluation:

Submitted test-phase reconstructed tissue-map images will be evaluated by computing

  • Root-mean-square-error (RMSE) averaged over the 100 test images:

    \( \frac{1}{100} \sum_{i=1}^{100} \sqrt{\frac{\|t_i - r_i\|_2^2}{3n}} \), where \(r_i\) and \(t_i\) are the triplet of reconstructed and true tissue-map images for case \(i\), and \(n\) is the number of pixels in a single image.
  • Worst-case 25x25 pixel ROI-RMSE over all 100 test tissue-map images:

    \( \max_{i,c} \sqrt{\frac{\|b_c (t_i - r_i)\|_2^2}{3m}} \), where \(b_c\) is a masking indicator function for the 25x25 ROI centered on coordinate \(c\) in the image, and \(m = 625\) is the number of pixels in the ROI.

Mean RMSE will be the primary metric that determines the algorithm ranking. In the case of a numerical tie (judged from the distribution of mean-RMSE results), the worst-case ROI-RMSE will be used to rank the algorithms.

 

The metrics above are computed by the following script, which evaluates your submissions:

metrics.py

    
    # This program demonstrates the mean RMSE and worst-case ROI RMSE metrics
    # The submitted tissue-map images serve as the test images
    # The phantom images are taken as the ground truth
    
    import numpy as np, sys, os, time
    import pdb
    
    # production
    INPUT = sys.argv[1] # INPUT which has both ./ref and ./res - user submission
    OUT = sys.argv[2] # OUTPUT
    
    REFERENCE = os.path.join(INPUT, "ref") # Phantom GT
    PREDICTION_OUTPUT = os.path.join(INPUT, "res") # not FBP, but hopefully phantom like
    
    # Load the ground-truth tissue maps; submissions try to recreate these.
    if len(os.listdir(REFERENCE)) == 3:
        # phantom_gt_file_name = os.listdir(REFERENCE)[0] # old
        # images are the ground truth
        images = np.array([np.load(os.path.join(REFERENCE,"Phantom_Adipose.npy")),
                           np.load(os.path.join(REFERENCE,"Phantom_Fibroglandular.npy")),
                           np.load(os.path.join(REFERENCE,"Phantom_Calcification.npy"))]
                         )
    else:
        raise Exception('Organizer, we need 3 *.npy files')
    
    #QA for algorithm summary and images
    acceptable_file_types = [".pdf",".docx",".txt",".doc",".npy"]
    if len(os.listdir(PREDICTION_OUTPUT)) <= 3:
        raise Exception('Remember, for this phase we need 3 .npy files plus an algorithm summary with one of these extensions: {}'.format(acceptable_file_types))
    elif len(os.listdir(PREDICTION_OUTPUT)) == 4:
        adipose_file_name = ""
        fibroglandular_file_name = ""
        calcification_file_name = ""
        for file in os.listdir(PREDICTION_OUTPUT):
            if file.find("_Adipose.npy") != -1:
                adipose_file_name = file
            elif file.find("_Fibroglandular.npy") != -1:
                fibroglandular_file_name = file
            elif file.find("_Calcification.npy") != -1:
                calcification_file_name = file 
            # the summary is any acceptably typed file that is not an .npy prediction;
            # the parentheses matter here, otherwise a stray .npy file would match
            elif (file[-4:] in acceptable_file_types or file[-5:] in acceptable_file_types) and not file.endswith(".npy"):
                algorithm_summary = file
            else:
                raise Exception('Remember, for this phase we need an algorithm summary with one of these extensions: {}. This error suggests you have four files, but your algorithm summary has the wrong extension or is missing.'.format(acceptable_file_types))
        testimages = np.array([np.load(os.path.join(PREDICTION_OUTPUT, adipose_file_name)),
                                np.load(os.path.join(PREDICTION_OUTPUT, fibroglandular_file_name)),
                                np.load(os.path.join(PREDICTION_OUTPUT, calcification_file_name))]
                                )
    else:
        raise Exception('Your submission has more than the expected 4 files; maybe ask a question in the forum.')
    
    # pdb.set_trace()
    nmat, nim, nx, ny = images.shape     #get the number of materials, images and number of pixels in x and y
    
    # BB
    # >>> nmat, nim, nx, ny
    # (3, 10, 512, 512)
    
    # mean RMSE computation
    diffsquared=(images-testimages)**2
    meanrmse  = np.sqrt( (((diffsquared/float(nmat*nx*ny)).sum(axis=3)).sum(axis=2)).sum(axis=0) ).mean()
    print("The mean RSME over %3i images is %8.6f "%(nim,meanrmse))
    
    
    # worst-case ROI RMSE computation
    roisize = 25  #width and height of test ROI in pixels
    x0 = 0        #tracks x-coordinate for the worst-case ROI
    y0 = 0        #tracks y-coordinate for the worst-case ROI
    im0 = 0       #tracks image index for the worst-case ROI
    
    def compute_maxroierror(images,testimages):
       maxerr = -1.
       x0 = y0 = im0 = 0  #defaults in case no ROI exceeds the zero-truth threshold
       for i in range(nim):
          print("Searching image %3i"%(i))
          im =  images[:,i].copy()
          imtest = testimages[:,i].copy()
          for ix in range(nx-roisize):
             for iy in range(ny-roisize):
                roi =  im[:,ix:ix+roisize,iy:iy+roisize].copy()
                roitest =  imtest[:,ix:ix+roisize,iy:iy+roisize].copy()
                if roi.max()>0.01:     #Don't search ROIs in regions where the truth image is zero
                   roirmse = np.sqrt( (((roi-roitest)**2)/float(nmat*roisize**2)).sum() )
                   if roirmse>maxerr:
                      maxerr = roirmse
                      x0 = ix
                      y0 = iy
                      im0 = i
       return maxerr,x0,y0,im0
    
    t1 = time.time()
    maxerr,x0,y0,im0 = compute_maxroierror(images,testimages)
    print("Worst ROI search took: ,",time.time()-t1," seconds")
    
    print("Worst-case ROI RMSE is %8.6f"%(maxerr))
    print("Worst-case ROI location is (%3i,%3i) in image number %3i "%(x0,y0,im0+1))
    
    with open(os.path.join(OUT,"scores.txt"), "w") as results:
       results.write("score_1: {}\n".format(meanrmse))
       results.write("score_2: {}".format(maxerr))
    

Terms and Conditions

By participating in this challenge, each participant agrees to the following terms. The DL-spectral CT challenge is organized in the spirit of cooperative scientific progress; the rules below apply to those who register a team and download the data:

  • Anonymous participation is not allowed.
  • Participants from the host institution and colleagues of the organizers may participate, but they will not be eligible for prizes or recognition as a ranking participant.
  • Entry by commercial entities is permitted, but should be disclosed.
  • Once participants submit their outputs to the DL-spectral CT challenge organizers, they will be considered fully vested in the challenge, so that their performance results will become part of any presentations, publications, or subsequent analyses derived from the Challenge at the discretion of the organizers.
  • The downloaded datasets, or any data derived from them, may not be given or redistributed under any circumstances to persons not belonging to the registered team.
  • The full data, including reference data associated with the test set cases, is expected to be made publicly available after the publication of the DL-spectral CT Challenge. Until the official public release of the data, data downloaded from this site may only be used for the purpose of preparing an entry to be submitted for the DL-spectral CT challenge. The data may not be used for other purposes in scientific studies and may not be used to train or develop other algorithms, including but not limited to algorithms used in commercial products.

Phase Schedule

  • Training: starts March 7, 2022, midnight UTC
  • Validation: starts March 31, 2022, midnight UTC
  • Test: starts May 17, 2022, midnight UTC
  • Competition ends: June 1, 2022, midnight UTC
