Tomographic data processing in true real-time

Introduction

The prototype TOF-PET scanner, constructed by the J-PET collaboration, consists of 192 plastic scintillators, each equipped with photomultipliers (PMTs) at both ends. This results in 384 analog channels that have to be processed by the Data Acquisition System (DAQ). The J-PET prototype will be used for investigations in the fields of medical imaging, nano-biology, and material science, and for tests of fundamental symmetries in physics. In this paper, we present a complete solution for the DAQ system together with the readout mechanisms. The detector readout chain is built out of Front-End Electronics (FEE), measurement devices such as Time-to-Digital or Analog-to-Digital Converters (TDCs or ADCs), data collectors, and storage.

Most PET scanners include coincidence units or multi-level trigger logic in order to discard, in real time, data classified as background noise. Such a unit can execute only low-level selection algorithms in order to meet the real-time constraints. Applying low-level rejection filters can result in the loss of a fraction of valuable data, while more complex algorithms might introduce a longer dead time of the DAQ system, reducing the rate of registered events. Our solution of continuous (trigger-less) data recording is a novel approach in such detector systems and ensures that most of the information is preserved on storage for further, high-level processing. The core of the presented system is based on the Trigger Readout Board v3 (TRBv3) platform, developed for and widely used in high-energy physics experiments.
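To illustrate the trade-off described above, the sketch below (illustrative only, not J-PET production code) shows the kind of low-level coincidence filter a conventional triggered readout applies: hits without a partner on another channel within the coincidence window are discarded. A trigger-less readout records everything, so such hits remain available for later, high-level processing.

```python
def coincidence_filter(hits, window_ns=5.0):
    """Keep only hits with a partner on another channel within window_ns.

    hits: list of (channel, timestamp_ns) tuples, sorted by timestamp.
    """
    kept = []
    for i, (ch, t) in enumerate(hits):
        # scan forward only while still inside the coincidence window
        for j in range(i + 1, len(hits)):
            ch2, t2 = hits[j]
            if t2 - t > window_ns:
                break
            if ch2 != ch:
                kept.extend([(ch, t), (ch2, t2)])
    # deduplicate while preserving order
    seen = set()
    return [h for h in kept if not (h in seen or seen.add(h))]

hits = [(0, 0.0), (1, 2.0), (5, 100.0), (3, 200.0), (4, 203.0)]
print(coincidence_filter(hits))
# → [(0, 0.0), (1, 2.0), (3, 200.0), (4, 203.0)]
# the unpaired hit at t = 100 ns is lost by the trigger, kept by trigger-less
```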

 

Photo 1: A view of the full-scale J-PET tomograph prototype

System Overview

The main element of the J-PET DAQ system is a collection of TRBv3 modules. These are high-performance, FPGA-based (Field Programmable Gate Array), and therefore reconfigurable, TDC readout boards. Each module is equipped with five Lattice ECP3 devices. The central one serves as the controller and local data collector, while the remaining four can be loaded with configware providing various functionality. For precise time measurement, a design providing 48 input channels and a time resolution of 12 ps has been developed. Each input channel features rising and falling signal edge detection and a buffer that can store up to 54 complete signals between two consecutive readouts. One TRBv3 module, called the Master, controls the readout procedure and the synchronization of all the other modules (Slaves). Each TRBv3 board has an individual Gigabit Ethernet (GbE) link for transmitting the collected measurement data out of the system for storage using standard, inexpensive network infrastructure.

An additional module, called the Central Controller Module (CCM), has been developed in order to provide efficient online data processing. It has sixteen GbE links as inputs from the Slave modules and a Xilinx Zynq-7045 as the processing unit. Data packets sent from the Slaves can be routed through the CCM, which can perform online analysis, histogramming, and data-quality assessment. The data is saved on the Event Building (EB) machines: server-class, multiprocessor PCs running software that collects data fragments from the network and reassembles them into complete data units called Events, each representing the state of the entire detector in a particular period of time. Such files are taken as the input to the analysis and image-reconstruction algorithms.
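The Event Building step can be sketched in a few lines. This is a minimal, hypothetical model (names and data layout are assumptions, not the real software): fragments arriving from individual boards are grouped by a shared time-slice identifier, and an Event is complete once every board has contributed its fragment for that slice.

```python
from collections import defaultdict

def build_events(fragments, n_boards):
    """Group fragments into Events.

    fragments: iterable of (time_slice_id, board_id, payload) tuples.
    Returns {time_slice_id: {board_id: payload}} for complete slices only.
    """
    slices = defaultdict(dict)
    for slice_id, board_id, payload in fragments:
        slices[slice_id][board_id] = payload
    # an Event describes the whole detector, so it needs all boards present
    return {sid: boards for sid, boards in slices.items()
            if len(boards) == n_boards}

frags = [(7, 0, b"aa"), (7, 1, b"bb"), (8, 0, b"cc")]
events = build_events(frags, n_boards=2)
print(sorted(events))  # → [7]  (slice 8 is still missing board 1)
```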

 

Photo 2: A view of the Data Acquisition System for J-PET

Central Controller Module

The described readout scheme results in a significant, constant stream of data that has to be processed; the maximum throughput reaches 8 Gbps. This amount of data can be efficiently distributed for storage over a number of Event Building machines running in parallel, connected by a 10G Ethernet network. However, the high event rate and the distributed data storage make such a system unsuitable for online analysis. To overcome this drawback, an additional module called the Central Controller Module (CCM) has been designed and developed. The CCM provides a computing facility for online analysis and data-quality assessment. The board features a Xilinx Zynq-7045 device, a hybrid of FPGA resources and an ARM processor. The FPGA architecture allows natural parallelism for processing multiple data streams, while the standard processor provides convenient access to the results. The module can be used as a "board-in-the-middle": it receives the packet streams from the Slave modules, performs analysis, and forwards the original packets further to the Event Building machines. TRBv3 data-format parsers and feature-extraction algorithms have been implemented as the foundation for higher-level data analysis.
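The "board-in-the-middle" role can be pictured as follows. In this hedged sketch, a parser extracts features from the passing words and fills a monitoring histogram, while the original packets are forwarded untouched. The 32-bit word layout used here is hypothetical and does not reflect the real TRBv3 data format.

```python
def process_stream(words, histogram):
    """Update a channel-occupancy histogram; return the words unchanged."""
    for w in words:
        channel = (w >> 24) & 0xFF   # assumption: channel id in the top byte
        histogram[channel] = histogram.get(channel, 0) + 1
    return words  # the original packets continue to the Event Builders

hist = {}
stream = [0x01000010, 0x01000020, 0x02000030]
forwarded = process_stream(stream, hist)
print(hist)                  # → {1: 2, 2: 1}
print(forwarded == stream)   # → True: analysis is non-destructive
```

The key design point this mirrors is that online monitoring adds no dead time: the analysis branch observes the stream without sitting in the critical path of data storage.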

 

Photo 3: A view of the Central Controller Module, equipped with a Xilinx Zynq SoC

 

Real-time Image Reconstruction

Real-time image reconstruction performed at the hardware level is a novel approach whose feasibility has been demonstrated by this project. FPGA devices nowadays have enough resources to perform the complex processing leading to image reconstruction, which can provide several benefits:

  • Reduction of the scan time
  • Instant feedback to the operator
    • Focus on particular body parts
    • Adjust measurement parameters
  • On-the-fly application of correction parameters
  • Self-calibrating scanners
  • Instant identification of device malfunctions

 

The natural parallelism and streamlined processing of FPGA devices allow us to implement innovative processing mechanisms that introduce no dead time and produce tomographic images with minimal latency, providing real-time visualization.

 

Block diagram presenting the processing flow. Output from the decomposition channels is combined into a single data bus and then distributed to a number of processing streams. Each processing stream implements an algorithm and can consist of many pipelined modules. The modules can communicate with each other in order to synchronize the processing flow.
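The flow in the diagram can be sketched in software terms (the real system is FPGA configware; the module names below are invented for illustration): decomposition channels are merged onto one bus, which is then fanned out to independent streams, each built as a pipeline of small modules.

```python
def merge_bus(*channels):
    """Combine the decomposition channels into a single data bus."""
    for ch in channels:
        yield from ch

def stage_scale(stream, factor):   # one pipelined module
    for x in stream:
        yield x * factor

def stage_offset(stream, off):     # a second module in the same pipeline
    for x in stream:
        yield x + off

bus = list(merge_bus([1, 2], [3]))
# two independent processing streams consuming the same bus in parallel
stream_a = list(stage_offset(stage_scale(iter(bus), 2), 1))
stream_b = list(stage_scale(iter(bus), 10))
print(stream_a)  # → [3, 5, 7]
print(stream_b)  # → [10, 20, 30]
```

Generators model the FPGA pipelines well: each stage consumes one item and emits one item per step, so data flows through every module continuously instead of being buffered between steps.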

 

Below you can see the first example of its kind of real-time reconstruction and visualization:

 

 

Contact person:

Grzegorz Korcyl

grzegorz.korcyl (at) uj.edu.pl