In 1917, the Austrian mathematician Radon formulated his theory on the possibility of reconstructing a two-dimensional function from its line integrals. It took more than fifty years to arrive at the first prototype of CT equipment, developed by Hounsfield. After a first implementation of reconstruction with iterative algebraic algorithms, filtered back projection (FBP) dominated the scene for 40 years, with some variations introduced to cope with spiral acquisition (interpolation of raw data) and multislice scanning (three-dimensional back projection). Since 2009, we have witnessed the introduction and rapid implementation of new iterative algorithms.
In CT equipment, attenuation profiles are acquired for each angle of the x-ray tube and for each sampling of the detector data. The complete set of attenuation profiles obtained during a full rotation of the tube-detector assembly, when represented as an image, is called the sinogram. The superposition of the attenuation profiles back-projected along the acquisition directions gives a rough first approximation of the tomographic plane, with evident noise (pixel values oscillate around an average where constant values are expected), artifacts (signal that differs from reality), and spread of the signal. To partially overcome this problem, the attenuation profiles are filtered by a convolution with a kernel that accentuates the edges of the profile. In this way, the object is better defined in the filtered back projection, and less signal is dispersed into the surrounding regions.
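The filtering and back projection steps described above can be sketched numerically. The following is a minimal numpy illustration, not any vendor's implementation: it uses a centred uniform disk, whose parallel-beam projection is known analytically and identical for every view, filters each profile with the ramp kernel, and smears the filtered profiles back across the image grid. All sizes and function names are illustrative choices.

```python
import numpy as np

def disk_projection(s, radius=0.5):
    """Analytic parallel-beam projection of a centred unit-density disk."""
    p = np.zeros_like(s)
    inside = np.abs(s) < radius
    p[inside] = 2.0 * np.sqrt(radius**2 - s[inside]**2)
    return p

def ramp_filter(profile, ds):
    """Filter one attenuation profile with the ramp kernel, applied in Fourier space."""
    freqs = np.fft.fftfreq(profile.size, d=ds)
    return np.real(np.fft.ifft(np.fft.fft(profile) * np.abs(freqs)))

def fbp_disk(n_pix=65, n_angles=90):
    """Filtered back projection of the disk phantom on an n_pix x n_pix grid."""
    s = np.linspace(-1.0, 1.0, n_pix)      # detector coordinate samples
    ds = s[1] - s[0]
    x, y = np.meshgrid(s, s)
    p = ramp_filter(disk_projection(s), ds)  # same profile for every view of a centred disk
    recon = np.zeros((n_pix, n_pix))
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        t = x * np.cos(theta) + y * np.sin(theta)        # where each pixel projects
        recon += np.interp(t, s, p, left=0.0, right=0.0)  # smear the filtered profile
    return recon * np.pi / n_angles          # normalization of the angular sum

img = fbp_disk()
```

Without the ramp filtering step, the same loop produces the blurred, signal-spreading approximation described in the text; with it, the reconstructed values approach the true density of the disk.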
An important aspect of filtered back projection is that the result depends strongly on the choice of the convolution filter. Depending on the case, we can favour noise reduction at the expense of spatial resolution (with a standard or even a smoothing filter) or favour spatial resolution while accepting a higher level of noise (with an edge-enhancing filter). With filtered back projection we therefore obtain an approximation of reality that balances resolution and noise.
From a mathematical point of view, the filtered reconstruction process can also be formulated in Fourier space. In the Fourier formulation of the filtered back projection, the convolution filtering becomes a ramp filtering in the frequency domain.
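The equivalence between the two formulations is just the convolution theorem, and can be verified numerically. The sketch below (illustrative values only) filters a smooth profile once by multiplying its spectrum by |f|, and once by explicit circular convolution with the spatial kernel obtained as the inverse transform of the ramp; the two results coincide.

```python
import numpy as np

n, ds = 64, 0.05
profile = np.exp(-((np.arange(n) - n / 2) * ds) ** 2 / 0.1)  # a smooth attenuation profile

ramp = np.abs(np.fft.fftfreq(n, d=ds))   # |f|: the ramp filter in the frequency domain
kernel = np.real(np.fft.ifft(ramp))      # its spatial-domain convolution kernel

# Frequency-domain filtering: multiply the spectrum by |f|.
filtered_freq = np.real(np.fft.ifft(np.fft.fft(profile) * ramp))

# Spatial-domain filtering: explicit circular convolution with the kernel.
idx = (np.arange(n)[:, None] - np.arange(n)[None, :]) % n
filtered_conv = kernel[idx] @ profile
```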
In the first decade of this century, the technological development of CT scanners was mainly aimed at increasing the number of slices that can be acquired simultaneously with a single rotation, which is particularly useful for cardiac examinations. The number of slices progressively increased from 4 to 16, then 64, and further to 256 and beyond, covering volumes up to 16 cm long with rotation times of the tube-detector assembly of less than 0.3 s. Some tomographs have two x-ray tubes and two detector arcs placed at about 90° to improve temporal resolution. Once these performances were achieved, research focused more on improving image quality and reducing the dose.
Figure 1. (a) Schematic view of a third-generation tube-detector assembly, common to all present CT equipment, with four rows of detectors. The number of detectors along the arc is of the order of one thousand. The nominal slice and radiation-beam thickness are defined at the rotation axis, usually about 0.5 m from the focal spot. As a consequence, a slice thickness of 0.6 mm corresponds to a detector thickness along the z axis of about 1.2 mm. (b) Representation of a spiral acquisition, with the trajectory of the four banks of detectors around the patient.
Work has been done on increasing the performance of the detectors in terms of efficiency and response times. One manufacturer has developed a double-layer detector that discriminates the energy of incident photons to create dual-energy images. Photon-counting detectors are also being studied, even if important technological limits still need to be overcome before they can enter clinical practice. On the x-ray tube side, the available kV range has expanded and now spans 60 to 140 kV, with indications on the preferred values for each clinical application. Higher filtrations and small focal spots are available, even at high anode currents.
The two elements that have had the greatest impact on dose reduction in CT diagnostics are the tube current modulation systems and the iterative algorithms, which will be discussed in the following paragraphs[98].
With regard to the CT equipment used in hybrid SPECT/CT and PET/CT machines, the number of slices commonly used is between 2 and 64, with a maximum scan length per rotation of 4 cm. One manufacturer offered a cone-beam CT machine for a period of time, but it is no longer in production. The kV values are between 80 and 140, and the maximum anode current can exceed 800 mA. In some cases, explicitly “non-diagnostic” CTs are implemented, in the sense that their function is limited to attenuation correction, scatter modelling, volume definition, and the anatomical localization of the radiopharmaceutical; they cannot be used autonomously as diagnostic CTs. In this case, the anode current is limited to some tens of mA.
Two different dose indicators, specifically defined for computed tomography, are normally used to compare different protocols and to assess diagnostic reference levels. The computed tomography dose index (CTDI) is a local dose indicator that quantifies the absorbed dose in a standard phantom (with a diameter of 32 cm for body scans and 16 cm for head scans) for contiguous axial scans or a helical scan. It intrinsically accounts for primary-beam and scatter contributions. The dose length product (DLP) accounts for both the local absorbed dose and the extension of the acquired volume; in most cases it can be obtained simply by multiplying the CTDI by the scan length[99]. Both CTDI and DLP are displayed together with the other exposure parameters of each selectable acquisition protocol. Typical CTDI values for diagnostic CT are about 60 mGy in the head region and about 10-15 mGy in the body region. Corresponding DLP values are of the order of 1000 mGy cm for a skull acquisition and of 400-600 mGy cm for a chest or abdominal scan. Regarding the CT acquisitions functional to nuclear medicine methods, several studies have been published with a wide range of values for the CTDI and DLP indicators, which in any case are typically less than half of the values reported for diagnostic CT[100–103].
It must be clear that the CTDI indicator, despite its undisputed utility in the definition and comparison of acquisition protocols, is referred to cylindrical phantoms of standard size and does not correctly represent the dose absorbed by the patient’s irradiated organs, which depends greatly on the actual anatomical dimensions[104]. In order to compare the relative radiation risks of the CT acquisition and of the radiopharmaceutical, the effective dose is typically employed. The effective dose for CT examinations can be roughly evaluated by means of conversion factors applied to DLP values[105], or with greater accuracy by dedicated software[106–108].
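The two relations just described (DLP as CTDIvol times scan length, and the rough DLP-to-effective-dose conversion) are simple enough to write down directly. In this sketch the function names are ours, and the conversion factors are illustrative region-dependent values of the order of published adult coefficients; the exact factors depend on the dosimetric model adopted.

```python
def dlp_mGy_cm(ctdi_vol_mGy, scan_length_cm):
    """Dose length product as CTDIvol times the scan length."""
    return ctdi_vol_mGy * scan_length_cm

# Illustrative conversion factors (mSv per mGy*cm); actual values depend on
# the dosimetric model and patient age class.
K_FACTORS = {"head": 0.0021, "chest": 0.014, "abdomen": 0.015}

def effective_dose_mSv(dlp, region):
    """Rough effective dose estimate obtained by scaling the DLP."""
    return dlp * K_FACTORS[region]
```

For example, a chest scan with CTDIvol 12.5 mGy over 40 cm gives a DLP of 500 mGy cm and, with the illustrative chest factor, an effective dose of the order of 7 mSv.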
In the latest generation of CT equipment, an automatic modulation system of the x-ray tube current is available which, on the basis of a pre-set quality index or a nominal mA value, modifies the intensity of the beam during the acquisition so as to keep the image quality constant while reducing the dose to the patient[109,110].
There are several current modulation techniques: angular (xy) modulation, in which the current varies within each rotation according to the different attenuation along the anterior-posterior and lateral directions; longitudinal (z-axis) modulation, in which the current follows the attenuation changes along the scan length; and combined xyz modulation (Fig. 2).
The presence of automatic modulation alters all the relationships, known for a constant tube current, between the dose and the other exposure parameters. It is therefore important to consider the behaviour of the system when other scanning parameters, such as the kV, the pitch, or the detector-bank combination, are modified with respect to the basic protocol. This information is essential for the adoption of optimization strategies that take into account all possible variables[111].
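A common way to reason about such modulation is the rule of thumb that the detected photon count, and hence the image noise, stays roughly constant when the tube current scales with the attenuation of the current view. The sketch below is a hypothetical rule for illustration only, not a vendor algorithm: `mu_cm` is an effective linear attenuation coefficient of water, `ref_cm` a reference water-equivalent thickness, and `mA_max` a cap on tube output; all parameter names and values are our assumptions.

```python
import math

def modulated_mA(water_equiv_cm, mA_ref=200.0, ref_cm=20.0, mu_cm=0.19, mA_max=800.0):
    """Hypothetical modulation rule: scale the tube current with the estimated
    attenuation of the view relative to a reference thickness, so that the
    detected quanta (and hence the noise) stay roughly constant."""
    mA = mA_ref * math.exp(mu_cm * (water_equiv_cm - ref_cm))
    return min(mA, mA_max)   # the generator cannot exceed its maximum output
```

Under this rule the current rises through the shoulders, drops over the lungs, and rises again in the upper abdomen, which is exactly the z-axis behaviour shown in Figure 2(a).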
Automatic current modulation is particularly useful in hybrid scanners employed for both paediatric and adult patients, where a proper definition of the needed level of noise for each age class results in a significant dose reduction and image quality optimization[100].
Figure 2. Different types of current modulation obtained with an anthropomorphic phantom. (a) Z-axis modulation: the peak at the shoulders is evident, with a decrease in the chest region and relatively higher values in the upper abdomen. (b) Angular modulation: the maximum values are constant, and the minimum value depends on the relative difference between the anterior and lateral thicknesses. (c) XYZ tube current modulation.
The benefits associated with the new-generation iterative methods include noise reduction at constant dose (and hence the possibility of reducing the dose at constant image quality), the mitigation of several artefacts, and improved low-contrast detectability[112].
Among the disadvantages, it is important to highlight the high computational load, the altered noise texture of the reconstructed images (which can appear artificially smooth), and a non-linear behaviour that makes image quality more difficult to characterize.
In commercial algorithms, the structure of the implementation is generally not known: in most cases, it is a black box where only the final effects can be evaluated. A possible classification distinguishes the algorithms in which the iterative cycle occurs exclusively in the image domain, after the filtered back projection; those in which the iterative cycle is implemented in the raw-data (sinogram) domain, and possibly also in the image domain; and those in which the iterative cycle involves the entire reconstruction process, with multiple comparisons between forward projection and back projection[115].
The iterative algorithms in the image domain exploit denoising techniques that reduce noise while preserving spatial resolution. An important point is that they should use additional information related to the acquisition process, such as the photon statistics along the directions of the different views; otherwise, they should be classified not as iterative reconstruction algorithms but as post-processing filters.
Methods working in the sinogram domain apply a resolution-preserving noise reduction filter directly on the raw data, taking Poisson statistics into account: a stronger filter (e.g. a wider smoothing kernel) is applied to the lower-intensity data (greater attenuation), and a weaker one to the data related to projections with less attenuation. The filtered sinogram can then be reconstructed with FBP or with an iterative algorithm. The effect on spatial resolution and on detectability is non-linear and not entirely predictable[116].
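The attenuation-dependent smoothing just described can be sketched with a toy filter. This is an illustrative construction, not any commercial implementation: the averaging window of each sample grows with its local attenuation value, so the most attenuated (and hence noisiest) samples are smoothed most aggressively, while well-exposed samples are left almost untouched.

```python
import numpy as np

def adaptive_smooth(profile, max_half_width=3):
    """Toy sinogram-domain filter: local mean whose window width grows with
    the local attenuation value (illustrative, not a vendor algorithm)."""
    profile = np.asarray(profile, dtype=float)
    peak = profile.max()
    scale = peak if peak > 0 else 1.0
    out = np.empty_like(profile)
    for i, v in enumerate(profile):
        half = int(round(max_half_width * v / scale))  # wider kernel where attenuation is high
        lo, hi = max(0, i - half), min(profile.size, i + half + 1)
        out[i] = profile[lo:hi].mean()
    return out
```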
The algorithms that operate on the entire reconstruction process are based on modelling the calculation of the attenuation profiles starting from the current image (the so-called forward projection); through the comparison with the measured attenuation profiles, they calculate the corrections to be applied iteratively to the image (Fig. 3). The question in this case is what we consider in the calculation of the attenuation profiles and in the comparison with the measured ones. We speak of statistical methods when we consider the contributions of Poisson and electronic noise in the definition of the calculated profiles, basically with algebraic methods, and when, in calculating the corrections, we attribute a lower weight to the projections affected by high noise[117]. We speak instead of model-based methods when we consider the different real aspects of the physical process of signal generation and acquisition, e.g. the finite size of the focal spot, of the voxel, and of the detectors, the contributions of scatter, and so on. In this second case, a world of different possibilities opens: obviously, the more elements of the real world are considered, the more complex and time-consuming the calculation process becomes. To improve the spatial resolution, discretization schemes finer than the voxel grid to be calculated can be considered, for example by halving the linear dimension of the voxels or by using blob density functions. For the beam components, it is possible to switch from single lines to multiple lines, parallel bands, or diverging bands, taking into account the sizes of the focal spot and of the detectors. Increasing the complexity, algorithms have been developed to model scatter radiation with analytical methods or Monte Carlo, so that the effects related to the energy spectrum can be considered, for example to correct beam-hardening artefacts[118].
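The forward-project / compare / back-project loop at the core of these methods can be illustrated on a toy linear system. Here the matrix `A` stands in for the forward projection (image to attenuation profiles) and its transpose for the back projection; the sizes, the random data, and the plain gradient-descent update are all our illustrative choices, far simpler than any clinical algorithm (a statistical method would additionally weight the residual by the inverse noise variance of each projection).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model of the scanner: A maps the image to its attenuation
# profiles (forward projection); A.T back-projects corrections.
n_pix, n_rays = 16, 32
A = rng.standard_normal((n_rays, n_pix))
x_true = rng.random(n_pix)          # the unknown "image"
b = A @ x_true                      # measured attenuation profiles

x = np.zeros(n_pix)                 # initial image estimate
step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size (1 / largest singular value squared)
for _ in range(5000):
    residual = A @ x - b            # calculated vs. measured profiles
    x = x - step * (A.T @ residual) # back-project the correction onto the image
```

In practice, the loop stops when the residual falls below a threshold or after a maximum number of iterations, as in the flow chart of Figure 3.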
The performance of model-based algorithms is significantly better than that of statistical algorithms, as shown by some comparative studies[119,120].
The iterative algorithms in CT find important applications in the situations indicated by the terms data sparsity and compressed sensing, in other words, situations where acquisition data are missing, either with a regular structure (e.g. a reduced number of views) or in an unstructured way (e.g. a partial rotation angle, or missing detector data in some views)[121].
Figure 3. Schematic flow chart of an iterative algorithm operating in the entire reconstruction process, with multiple forward and backward projections. When the comparison between the calculated and the acquired profiles gives differences below a predefined threshold, or after a maximum number of iterations, the final image is achieved.
Common to most commercial iterative algorithms is the possibility to choose the intensity level of the iterative contribution, so that the final image can be defined as a linear combination of the FBP image and the image with the maximum iterative contribution. Some manufacturers speak of an iterative percentage, others of strength or level. During the installation phase, the application specialist typically proposes initial iterative levels for the different protocols; afterwards, it is possible to optimize the iterative level[122]. In nuclear medicine equipment, where the main purpose of the CT images is attenuation correction and anatomical localization, high levels of iterative contribution, with significant dose reductions, are applicable[101–103].
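The linear combination described above amounts to a one-line blend. The function below is a generic sketch of that idea, with `strength` playing the role of the vendor-specific iterative percentage or level; the name and the interface are ours, not any manufacturer's API.

```python
import numpy as np

def blend(fbp_image, iterative_image, strength):
    """Final image as a linear combination of the FBP image and the image
    with the maximum iterative contribution; strength in [0, 1]."""
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    return (1.0 - strength) * fbp_image + strength * iterative_image
```

With `strength = 0` the FBP image is returned unchanged; with `strength = 1` the fully iterative image is used, as in the high-level settings applicable on hybrid scanners.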
Metal artefact reduction algorithms
The accuracy of CT images is often affected by the presence of artefacts of different origins, such as noise, beam hardening, scatter, motion, cone-beam, ring, and metal artefacts. The last are among the most common, and they are caused by metallic implants within a patient’s body: dental fillings, orthopaedic prostheses, pacemakers, and cardiac defibrillators. The strong attenuation of the x-ray beam across these devices determines a gap in the projection data, a loss of information on the actual attenuation of the structures close to the metallic implants, and the resulting appearance of streaking artifacts and void regions in the reconstructed CT image[123].
The use of CT images with streaking artefacts for the CT-based attenuation correction of PET data can propagate the artefacts into the corresponding PET images in the form of over- or under-estimated activity concentration regions, which might lead to a false diagnosis.
Also in diagnostic radiology and in radiotherapy, the presence of metal artefacts can lead to substantial errors. For these reasons, several metal artefact reduction (MAR) strategies have been developed to limit the impact of these artifacts and to improve the accuracy of the density values in the regions surrounding metallic elements. They can be divided into implicit methods, based on the choice of the best exposure parameters to limit the impact of the metallic element, and explicit approaches, using software reconstruction and processing algorithms. Implicit approaches include tilting the gantry to exclude the metallic object from the tomographic plane, as well as increasing the beam energy and intensity. The most common methods belong to the explicit category, with various techniques operating either in the sinogram or in the image domain. A recent review of a broad range of MAR methods showed that the sinogram-based methods seem, in general, more accurate than the image-based ones (27).
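A common family of sinogram-based MAR techniques identifies the metal trace in the raw data and fills the gap by interpolation from the neighbouring uncorrupted samples. The sketch below is a deliberately simplified illustration of that idea, not any published algorithm: the metal trace is detected by a simple attenuation threshold, and each corrupted profile is repaired by linear interpolation along the detector axis.

```python
import numpy as np

def inpaint_metal_trace(sinogram, threshold):
    """Simplified sinogram-domain MAR: mark detector samples whose attenuation
    exceeds threshold as corrupted by metal, and fill them in each view by
    linear interpolation from the neighbouring uncorrupted samples."""
    out = sinogram.copy()
    for view in out:                     # one attenuation profile per view
        bad = view > threshold
        if bad.any() and not bad.all():
            good = np.flatnonzero(~bad)
            view[bad] = np.interp(np.flatnonzero(bad), good, view[good])
    return out
```

After inpainting, the sinogram can be reconstructed as usual (with FBP or iteratively); more refined methods re-insert the segmented metal object into the final image and use smarter priors than linear interpolation.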