| { | |
| "title": "\u2113__\u20621DecNet+: A new architecture framework by \u2113__\u20621 decomposition and iteration unfolding for sparse feature segmentation", | |
| "abstract": "based sparse regularization plays a central role in compressive sensing and image processing. In this paper, we propose DecNet, as an unfolded network derived from a variational decomposition model incorporating related sparse regularization and solved by scaled alternating direction method of multipliers (ADMM). DecNet effectively decomposes an input image into a sparse feature and a learned dense feature, and thus helps the subsequent sparse feature related operations. Based on this, we develop DecNet+, a learnable architecture framework consisting of our DecNet and a segmentation module which operates over extracted sparse features instead of original images. This architecture combines well the benefits of mathematical modeling and data-driven approaches. To our best knowledge, this is the first study to incorporate mathematical image prior into feature extraction in segmentation network structures. Moreover, our DecNet+ framework can be easily extended to 3D case. We evaluate the effectiveness of DecNet+ on two commonly encountered sparse segmentation tasks: retinal vessel segmentation in medical image processing and pavement crack detection in industrial abnormality identification. Experimental results on different datasets demonstrate that, our DecNet+ architecture with various lightweight segmentation modules can achieve equal or better performance than their enlarged versions respectively. This leads to especially practical advantages on resource-limited devices.", | |
| "sections": [ | |
| { | |
| "section_id": "1", | |
| "parent_section_id": null, | |
| "section_name": "Introduction", | |
| "text": "Image segmentation is one of the most important middle-level vision tasks, which bridges low-level vision operations and high-level vision applications.\nSo far, people developed lots of conventional methods and learning-based methods. In whatever method, an accurate suitable feature description of each object content is a key. In conventional segmentation methods, object features are explicitly modeled, such as a constant or polynomial intensity function. In learning-based methods, object features are implicitly learned in neural networks and usually lack of interpretability.\nIn this paper, we consider the problem of sparse feature extraction and segmentation from an input image. This problem raises in diverse applications like retinal vessel segmentation and crack detection. To solve this problem, we assume an input image be an addition of a sparse feature and a hard-to-describe dense feature (background). Our idea is to combine mathematical modeling and machine learning spirits, by introducing an decomposition model and using deep unfolding strategy.\nBased on the powerful regularization, also even called \u201cmodern least squares\u201d, we propose a decomposition minimization model, which decomposes an input image to two components, one as the sparse feature and the other as the dense feature (background). The sparse feature component is characterized by an regularization, while the hard-to-describe background component is characterized by an regularization composed by some sparsifying linear transformations which will be learned from training data. After deriving a scaled-ADMM solver for this composite optimization problem, we unfold the iterative scheme to construct a deep neural network to build our DecNet. Then an architecture framework named DecNet+ is developed, by connecting our\nDecNet and an any segmentation module; see Fig. 1 ###reference_###.DecNet extracts a sparse feature well from an input image and deliver the feature to the subsequent segmentation module to finalize the segmentation task. As can be seen, sparsity priors (with or without learnable linear transformations) for feature descriptions are embedded into DecNet+. Therefore, it combines well mathematical modeling and data-driven spirits. To train DecNet+, we use ADAM to minimize a loss function consisting of a segmentation loss and a feature loss.\nWe conduct our experiments of DecNet+ on three datasets DRIVE, CHASE_DB1 and CRACK for sparse feature segmentation. We construct six DecNet+ architectures by using six popular lightweight segmentation networks as the segmentation modules for our experiments. Tests and comparisons show that our DecNet+ has the flexibility on the choice of segmentation module. 
While achieving an equal or better performance of the compared large network related to the used segmentation module, our DecNet+ architecture uses much less learnable parameters and much less storage occupation for network weights.\nOur contributions are as follows\nBy modeling an input image as an addition of a sparse feature and a hard-to-describe dense feature (background), we propose an decomposition model, where the sparse feature is characterized by an regularization and the dense feature is characterized by an regularization composed by some sparsifying linear transformations.\nBased on a scaled-ADMM solver for the decomposition model and deep unfolding method, we propose a learnable sparse feature extraction network DecNet, where the linear sparsifying transformations for the hard-to-describe dense feature and some model and algorithm parameters are relaxed to be learnable.\nThrough delivering the extracted sparse feature by DecNet to a segmentation module, we further construct an DecNet+ architecture, which can adapt to diverse lightweight segmentation modules to achieve equal or better performances than their enlarged versions respectively, indicating the advantages of integrating mathematical modeling and data-driven strategy." | |
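As announced in contribution (1), the following display is a minimal sketch of the kind of ℓ₁ decomposition model we use; the symbols f (input image), v (sparse feature), u (dense background), W_k (learnable sparsifying transformations) and the positive weights are notation introduced for this sketch, not taken verbatim from Section III.

```latex
% Minimal sketch of an l1 decomposition model of the kind described above.
% f: input image; v: sparse feature; u: dense background;
% W_k: learnable sparsifying linear transformations; alpha, beta_k > 0.
\min_{u,\,v}\;\; \alpha\,\|v\|_{1} \;+\; \sum_{k}\beta_{k}\,\|W_{k}u\|_{1}
\qquad \text{subject to} \qquad f = u + v .
```

A scaled-ADMM solver for such a model alternates soft-thresholding updates on the two components with multiplier updates, and unfolding these iterations yields the layer structure of ℓ₁DecNet." |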
| }, | |
| { | |
| "section_id": "2", | |
| "parent_section_id": null, | |
| "section_name": "II Related Works", | |
| "text": "In this section, we review most related works on (1) regularization in compressive image processing, (2) the variational framework for image decomposition with different regularization towards different tasks, and (3) deep unfolding methods." | |
| }, | |
| { | |
| "section_id": "2.1", | |
| "parent_section_id": "2", | |
| "section_name": "II-A related regularization", | |
| "text": "related regularization is one of the most powerful techniques to characterize sparsity property of signal and image data. It plays a central role in compressive sensing (CS) theory[1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###] with extensive applications and was even called \u201cmodern least squares\u201d[4 ###reference_b4###], due to its computational tractability and capability to reconstruct sparse signals from fewer measurements than those required by Shannon-Nyquist sampling theorem. The regularization composed by certain linear transformations helps to design effective minimization models in imaging applications, like total variation (TV) model[5 ###reference_b5###], wavelet frame based approaches[6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###], and lots of subsequent developments (See, e.g., those summarized in [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###]).\nrelated regularization, although being convex, is non\nsmooth. A large variety of efforts were devoted to developing efficient optimization algorithms to solve related minimization models, like iterative shrinkage-thresholding algorithm (ISTA)[6 ###reference_b6###], Bregman iterations[12 ###reference_b12###, 13 ###reference_b13###], fast iterative shrinkage-thresholding algorithm (FISTA)[14 ###reference_b14###], alternating direction method of multipliers (ADMM)[15 ###reference_b15###, 16 ###reference_b16###], primal-dual[17 ###reference_b17###, 18 ###reference_b18###], proximity based fixed point iteration[19 ###reference_b19###]; see recent books[20 ###reference_b20###, 10 ###reference_b10###] and references therein.\nWe use related regularization with or without linear transformation to characterize the sparsity of different feature components of an image." | |
| }, | |
| { | |
| "section_id": "2.2", | |
| "parent_section_id": "2", | |
| "section_name": "II-B Variational image decomposition", | |
| "text": "In image processing, one important task is to decompose one single image into two or more useful components. These components illustrate different image structures and benefit for many high-level vision tasks[21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###].\nIn variational framework, the related minimization model should thus consist of two or more regularizers for different components and dates back to infimal convolution techniques[24 ###reference_b24###]. Following [24 ###reference_b24###], people considered various decomposition models. Usually there is one cartoon component regularized by the TV prior[24 ###reference_b24###, 25 ###reference_b25###] or non-convex TV prior[26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###]. The TV prior is a composition of -norm and gradient operators, characterizing the sparsity [29 ###reference_b29###, 1 ###reference_b1###, 30 ###reference_b30###] of an image in gradient domain. The other components depend on applications, such as cartoon-texture decomposition[31 ###reference_b31###, 32 ###reference_b32###, 33 ###reference_b33###], Retinex illumination estimation[34 ###reference_b34###, 35 ###reference_b35###, 36 ###reference_b36###, 37 ###reference_b37###] and intensity inhomogeneity removal[27 ###reference_b27###, 28 ###reference_b28###] for segmentation. Note that these are not data-driven methods, and model and algorithm parameters therein are set manually." | |
| }, | |
| { | |
| "section_id": "2.3", | |
| "parent_section_id": "2", | |
| "section_name": "II-C Deep unfolding methods", | |
| "text": "From the pioneering work of LISTA (learned ISTA)[38 ###reference_b38###], deep unfolding methods provide powerful tools for the network design on problems equipped with clear physical or mathematical models. The implicit layer-like structures in iterative solvers are mapped into deep architectures. Model and algorithm parameters, and even others, are relaxed to be learnable from data.\nAs far as we know, there are two categories of deep unfolding methods. In the first type, people construct unfolding networks[39 ###reference_b39###, 40 ###reference_b40###, 41 ###reference_b41###, 42 ###reference_b42###, 43 ###reference_b43###] from numerical ODE or PDE. In the second type, people unroll some optimization algorithms to construct networks, like [38 ###reference_b38###, 44 ###reference_b44###] from ISTA for sparse coding, [45 ###reference_b45###] from primal dual hybrid gradient algorithm for image reconstruction, [46 ###reference_b46###, 47 ###reference_b47###] from ISTA, [48 ###reference_b48###] from ADMM and [49 ###reference_b49###] from FISTA for CS reconstruction, and [50 ###reference_b50###, 51 ###reference_b51###, 52 ###reference_b52###, 53 ###reference_b53###] from alternating minimization solving (approximate) penalized problems for image restoration or enhancement tasks; See [10 ###reference_b10###] and references therein for more details.\nUnlike previous works, we start from an additive decomposition model and its ADMM solver, and design a learnable architecture for feature extraction and segmentation task." | |
| }, | |
| { | |
| "section_id": "3", | |
| "parent_section_id": null, | |
| "section_name": "III DecNet+ for sparse feature segmentation", | |
| "text": "We now present our DecNet+ architecture for sparse feature segmentation, which performs segmentation operation on the sparse feature extracted by DecNet. Fig. 1 ###reference_### shows the overall structure of DecNet+, DecNet followed by a segmentation module. DecNet decomposes an input image into a dense feature and a sparse feature . The following segmentation module takes only the feature for segmentation. This segmentation module is rather abstract and any current segmentation networks (like UNet and its variants) can be used here. We mention that, for multi-channel input images, each channel is processed by a separate DecNet, and the extracted features of all channels are concatenated and delivered to the segmentation module.\nWe next give the training method for our DecNet+ architecture. Given data pairs with images and labels, we define the training loss as\nwhere\nHere , and are the dense feature, sparse feature and segmentation result of the -th input . The represent learnable parameters in DecNet and segmentation module , and is a positive coefficient.\nThe feature loss aims to pursuit the sparsity assumption on through training. It prevents DecNet from degenerating to identity or all the kernels from degenerating to identity, thus avoids yielding or . Such loss function will be optimized by ADAM algorithm.\nThe DecNet+ architecture and training method combine well the mathematical modeling and data-driven spirit. It leads to a feature-aware segmentation method. With the help of decomposition for sparse feature extraction, it can be expected that our proposed method has better performance and can reduce the amount of trainable parameters. As far as we know, our DecNet+ is the first one to integrate decomposition, the mathematical modeling on sparsity, into design of network blocks." | |
| }, | |
| { | |
| "section_id": "4", | |
| "parent_section_id": null, | |
| "section_name": "IV Experiments", | |
| "text": "In this section, we show our experiments on our DecNet+ architecture and comparisons to others for sparse feature segmentation. We implement DecNet+ networks, training and testing procedures with PyTorch 1.12 and PyTorchLightning 1.7, using the ADAM optimizer and ReduceLROnPlateau learning rate scheduler. The segmentation modules and compared networks are taken from SegmentationMethodPytorch (SMP)[61 ###reference_b61###] without pre-trained weights, and embedded into our training and testing procedures. All experiments are conducted on one GPU device, RTX 4090 24G, on a Ubuntu20.04 server." | |
| }, | |
| { | |
| "section_id": "4.1", | |
| "parent_section_id": "4", | |
| "section_name": "IV-A Datasets", | |
| "text": "Due to limited computational resources, we choose or construct the following small datasets for our experiments, i.e., DRIVE, CHASE (CHASE_DB1) and CRACK.\nDRIVE[62 ###reference_b62###] is a retinal vessel dataset with 40 RGB 565 584 images, and binary labels for 8.63% vessel pixels. We take 20 images for training and the other 20 images for testing.\nCHASE [63 ###reference_b63###] is a retinal vessel dataset with 28 RGB 999 960 images, and binary labels for 7.19% vessel pixels. We take 20 images for training and 8 images for testing.\nCRACK is a pavement crack dataset with 206 RGB images, and binary labels for 0.32% cracks pixels. It consists of crack-centered cropped images of a subset of Cracktree[64 ###reference_b64###]. We take 190 images for training and 16 images for testing.\nFig. LABEL:fig:dataset shows some images and labels from the above three datasets. During the training process, all images are randomly cropped into patches of a uniform size . During the testing process, images will be processed with overlapped moving window of . We use the standard flipping and cropping data augmentation from PyTorch." | |
| }, | |
| { | |
| "section_id": "4.2", | |
| "parent_section_id": "4", | |
| "section_name": "IV-B Choice of the segmentation module in DecNet+", | |
| "text": "The segmentation module (the segmentation subnetwork) of DecNet+ in our experiments uses a UNet[65 ###reference_b65###] of a certain size and with a certain single encoder, or a MANet[66 ###reference_b66###]/UNet++ [67 ###reference_b67###] of a certain size and with fused multiple encoders. Four families of encoding blocks for UNet will be tested, denoted as CNN, Res-x, Eff-x and Mit-x, corresponding to UNet (2015)[65 ###reference_b65###], ResNet (2016)[68 ###reference_b68###], EfficientNet (2019) [69 ###reference_b69###] and MixVisionTransformer (2021) [70 ###reference_b70###]. Besides, one family of encoding blocks for MANet/UNet++ is further tested, denoted as Res-x, corresponding to ResNet. The suffix x indicates the size of the encoding block. Documents of the github repository [61 ###reference_b61###] show details of the structure of the segmentation module. For convenience of description, we abbreviate UNet, MANet and UNet++ as U, MA and Upp. We use subscripts to indicate the size of a segmentation module, and parentheses to indicate the encoding block. For example, (Res18) represents a UNet of 3 down-samplings and 32 start channels with the ResNet18 encoding block. In particular, we test the following six small-scale segmentation modules in our DecNet+ architecture, including (CNN), (Res18), (Effb0), (Mitb0), (Res18) and (Res18)." | |
| }, | |
| { | |
| "section_id": "4.3", | |
| "parent_section_id": "4", | |
| "section_name": "IV-C Hyperparameter settings and learnable parameter initializations", | |
| "text": "Unless otherwise specified, we set hyperparameters and initialize learnable parameters in DecNet+ and its training procedure as follows. In DecNet, we set , and . The learnable parameters are initialized as in Table I ###reference_###, where is a kernel by zero-padding and is a kernel by zero-padding . For the segmentation module in DecNet+ and its enlarged version, we take the default parameter settings and initializations in the literatures. In the training loss, we set ." | |
| }, | |
| { | |
| "section_id": "4.4", | |
| "parent_section_id": "4", | |
| "section_name": "IV-D Evaluation metrics", | |
| "text": "We use AUC (area under the ROC curve) score [71 ###reference_b71###] to evaluate the network performances in all experiments. The AUC score is a number in , and is the higher the better. It is one of the most popular scores especially for binary segmentation and classification tasks in deep learning.\nConsidering the randomness involved in the training process, these scores are calculated by averaging the scores of all the inferences obtained from 5 independent training and testing procedures of the network under the same settings." | |
| }, | |
| { | |
| "section_id": "4.5", | |
| "parent_section_id": "4", | |
| "section_name": "IV-E Experiments on the influence of hyperparameters of DecNet", | |
| "text": "Here we test the influence of hyperparameters , and of DecNet to the performance of DecNet+ architecture, where we use (CNN) as the segmentation module and DRIVE as the dataset. The AUC scores is shown in Table II ###reference_###. We can see that, when is fixed, and have little influence on AUC score; meanwhile, when and are fixed, the AUC score slightly increases and peaks at . Therefore in the following experiments, we use and . As for the kernel number, we set , considering the trade-off of the performance and memory occupation." | |
| }, | |
| { | |
| "section_id": "4.6", | |
| "parent_section_id": "4", | |
| "section_name": "IV-F Experiments on the benefits by DecNet feature extraction for DecNet+ segmentation", | |
| "text": "###figure_1### In this subsection, we show the benefits of DecNet feature extraction for segmentation in DecNet+ architecture, including the flexibility of the choice of segmentation module and the assistance on reducing the scale of the segmentation network.\nWe construct six DecNet+ segmentation architectures by using small-scale segmentation modules, and compare them to their segmentation modules and enlarged versions. In particular, we compare DecNet+ (CNN) to (CNN) and (CNN), DecNet+ (Res18) to (Res18) and (Res34), DecNet+ (Effb0) to (Effb0) and (Effb2), DecNet+ (Mitb0) to (Mitb0) and (Mitb1), DecNet+ (Res18) to (Res18) and (Res34), DecNet+ (Res18) to (Res18) and (Res34), respectively; see Table III ###reference_### and Table IV ###reference_###.\nTable III ###reference_### records the segmentation results in terms of AUC score on DRIVE, CHASE and CRACK datasets, along with the estimated number of learnable parameters of each network by PyTorch. Some segmetation results are shown in Fig. 3 ###reference_### by zoom-in patches of the images and labels from DRIVE, CHASE and CRACK, along with the results from (CNN), (CNN) and DecNet+(CNN). Table IV ###reference_### shows training, inference and storage costs in terms of time per backward-propagation step (time per bp step), number of multiply-accumulate operations in forward-propagation\n(fwd MACs) and disk occupation, of the segmentation networks compared in Table III ###reference_###. Therein, fwd MACs are measured by Python package DeepSpeed (0.11.1, cpupy310) https://github.com/microsoft/DeepSpeed.\nWe can see that, in most cases, DecNet+ architecture with a small-scale segmentation module performs the best, and achieves even better results than the enlarged version of the segmentation module, showing the effectiveness of the DecNet sparse feature extractor and the flexibility of the choice of the segmentation module. We can also observe that DecNet+ framework only introduce 0.02% 1.42% extra learnable parameters (2.5k) to the segmentation module. Overall speaking, the training costs (i.e. time per bp step) are comparable among the three models, DecNet+ architecture, its segmentation module and enlarged version; while the inference cost (i.e. fwd MACs) and storage requirement (i.e. disk occupation) of our DecNet+ architecture is lower than the enlarged version of its segmentation module. These advantages arise from the fact that DecNet can extract sparse features well and thus consistently help the subsequent segmentation procedures.\n###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7###" | |
| }, | |
| { | |
| "section_id": "4.7", | |
| "parent_section_id": "4", | |
| "section_name": "IV-G Comparison of the evolution of the segmentation inference during training procedure", | |
| "text": "In this subsection, we compare the evolution of the segmentation inference during training procedure between our DecNet+ architectures and other models. Such intermediate results are generated after training procedure, based on stored weights of networks at some specific epochs. We take the weights of networks at 100th, 300th, 600th and 1200th epochs of training on DRIVE and CHASE datasets, and 50th, 100th, 200th, 400th epochs of training on CRACK dataset.\nIn Fig. 4 ###reference_###, we show a comparison among (CNN), (CNN) and DecNet+(CNN) as a representation of the networks tested in Table III ###reference_###. On DRIVE and CHASE datasets, Fig. 4 ###reference_### shows that, (CNN) and (CNN) suffer an obvious illusion (the edge of the field of view) in the whole training procedure, while our DecNet+(CNN) keeps illusion-free; On CRACK dataset, it shows that the three models have similar behavior. Fig. 5 ###reference_### illustrates a comparison among (Mitb1), (Mitb0) and DecNet+(Mitb0) as a special case that a simply enlarged model may collapse. It can be observed that the simply enlarged model (Mitb1) takes more than 600 epochs to generate reasonable results on CHASE dataset, and even fails on CRACK dataset. Our DecNet+ architecture with various segmentation modules performs well uniformly on three datasets. The reason is that our DecNet+ architecture combines well the mathematical modeling and data-driven approaches." | |
| }, | |
| { | |
| "section_id": "4.8", | |
| "parent_section_id": "4", | |
| "section_name": "IV-H Validation of the sparse feature from DecNet+", | |
| "text": "###figure_8### As we know, the entry values of a sparse vector obey a Laplacian distribution, which indeed induces the regularization in minimization models[72 ###reference_b72###]. In this subsection, we validate that the component of the DecNet output does obey approximately a Laplacian distribution after network training.\nWe use the experiment results of DecNet+(CNN) on DRIVE,CHASE and CRACK datasets. We compute the histograms of the gray intensities of some examples of input , the histograms of those output feature and their Laplacian fitting curves. In Fig. 6 ###reference_###, we take random 8 patches of the first image of each testing subset as the example for histogram illustration, where the distributions of patches of from DRIVE, CHASE and CRACK are quite different; see the first, third and fifth rows. The second, fourth and sixth rows show the histograms of their feature calculated by trained DecNet+(CNN) with Laplacian fitting curves by SciPy package. We can see that the Laplacian distribution prior of the feature is approximately kept through deep unfolding and network training, no matter what distribution follows." | |
| }, | |
| { | |
| "section_id": "5", | |
| "parent_section_id": null, | |
| "section_name": "Conclusion and discussion", | |
| "text": "In this paper, we proposed an unfolding network, namely DecNet, from an regularized variational decomposition model and its scaled-ADMM solver. DecNet outputs two components, one is a sparse feature characterized by an regularization, and the other is a dense feature (hard-to-mathematically-describe background) characterized by related regularization with some learnable sparsifying transformations. We also further constructed DecNet+, a learnable architecture framework for sparse feature segmentation, by connecting DecNet and some segmentation module. This architecture integrates well mathematical modeling and data-driven approaches. Benefited from the embedded sparsity prior in DecNet, this architecture with any popular lightweight segmentation module can potentially achieve good performances stably in sparse feature segmentation problems. Experiments and comparisons on DRIVE, CHASE and CRACK datasets demonstrated such advantages.\nBecause of the efficiency in learnable parameters and flexibility in module choice of our DecNet+ framework, we can extend this network design strategy to more general applications. First, it is not difficult to extend our DecNet+ framework to 3D case for volumetric data segmentation, by properly adapting the 2D convolutions to 3D. Second, our proposed DecNet+ framework is for general sparse feature segmentation, and it is worthy to add application-specific constraints to it for concrete applications like topology and connectivity constraints for tubular structure segmentation problems." | |
| } | |
| ], | |
| "appendix": [], | |
| "tables": { | |
| "1": { | |
| "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Initialization of learnable parameters in DecNet for different datasets.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.6.4.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.4.5.1\">Dataset</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.3.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.4.2.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.3.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.6.4.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.7.5.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.7.5.2.1\">DRIVE/CHASE</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.7.5.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.7.5.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.006</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.7.5.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.003</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.7.5.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S4.T1.8.6.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.6.2.1\">CRACK</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.8.6.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.8.6.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.006</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.8.6.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.02</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.8.6.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">5</td>\n</tr>\n</tbody>\n</table>\n</figure>", | |
| "capture": "TABLE I: Initialization of learnable parameters in DecNet for different datasets." | |
| }, | |
| "2": { | |
| "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Experiments on the influence of different combinations of hyperparameters , and in terms of AUC for DecNet+(CNN) on DRIVE.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.34\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.12.2\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.12.2.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"3\" id=\"S4.T2.12.2.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n=8,=3</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.15.5\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.15.5.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.13.3.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n=2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.14.4.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n=4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.15.5.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n=6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.18.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.18.8.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">AUC</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.16.6.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98620.0035</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.17.7.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98670.0024</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.18.8.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98630.0056</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.20.10\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.20.10.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"3\" id=\"S4.T2.20.10.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n=8,=2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.23.13\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.23.13.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.21.11.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n=2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.22.12.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n=3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.23.13.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n=4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.26.16\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.26.16.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">AUC</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.24.14.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98570.0049</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.25.15.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98620.0035</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.26.16.3\" 
style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98610.0033</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.28.18\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.28.18.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"3\" id=\"S4.T2.28.18.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n=2, =3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.31.21\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.31.21.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.29.19.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n=6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.30.20.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n=8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.31.21.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n=10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.34.24\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.34.24.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">AUC</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T2.32.22.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98590.0056</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T2.33.23.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98620.0035</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T2.34.24.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98570.0032</td>\n</tr>\n</tbody>\n</table>\n</figure>", | |
| "capture": "TABLE II: Experiments on the influence of different combinations of hyperparameters , and in terms of AUC for DecNet+(CNN) on DRIVE." | |
| }, | |
| "3": { | |
| "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Inference results (AUC) by different segmentation networks on three datasets. The DecNet+ architecture with a small-scale segmentation module performs better than other segmentation networks in most cases, while introducing few extra learnable parameters over its segmentation module.The symbol \u201c-\u201d means that the model has not generated reasonable results under current settings, and hence no score is reported; See Fig. <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.02690v2#S4.F5\" title=\"Figure 5 \u2023 IV-F Experiments on the benefits by \u2113__\u20621DecNet feature extraction for \u2113__\u20621DecNet+ segmentation \u2023 IV Experiments \u2023 \u2113__\u20621DecNet+: A new architecture framework by \u2113__\u20621 decomposition and iteration unfolding for sparse feature segmentation\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.79\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.10.8\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.10.8.9\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(CNN)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.2.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(CNN)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.6.4.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.6.4.4.2\">\n<tr class=\"ltx_tr\" id=\"S4.T3.5.3.3.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.5.3.3.1.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\nDecNet+</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.6.4.4.2.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.6.4.4.2.2.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(CNN)</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.7.5.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res34)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.8.6.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res18)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.10.8.8\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.10.8.8.2\">\n<tr class=\"ltx_tr\" id=\"S4.T3.9.7.7.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.9.7.7.1.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\nDecNet+</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.10.8.8.2.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.10.8.8.2.2.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res18)</td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.79.78.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.79.78.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">#Param.</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.79.78.1.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">2.9m</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.79.78.1.3\" 
style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">176k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.79.78.1.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">2.5k+176k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.79.78.1.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">21.5m</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.79.78.1.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">11.4m</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.79.78.1.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">2.5k+11.4m</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.16.14\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.16.14.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">DRIVE(AUC)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.11.9.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.97220.0122</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.12.10.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.97410.0069</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.13.11.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T3.13.11.3.1\">\\ul</span>0.98620.0035</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.14.12.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.96740.0094</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.15.13.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.96800.0088</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.16.14.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T3.16.14.6.1\">\\ul</span>0.98510.0040</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.22.20\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.22.20.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">CHASE(AUC)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.17.15.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.97440.0073</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.18.16.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.97190.0061</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.19.17.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T3.19.17.3.1\">\\ul</span>0.98470.0056</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.20.18.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98540.0038</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.21.19.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98450.0034</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.22.20.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T3.22.20.6.1\">\\ul</span>0.98920.0024</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.28.26\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.28.26.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">CRACK(AUC)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.23.21.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T3.23.21.1.1\">\\ul</span>0.99050.0101</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.24.22.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98590.0138</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r\" id=\"S4.T3.25.23.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98740.0139</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.26.24.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T3.26.24.4.1\">\\ul</span>0.99080.0088</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.27.25.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98930.0108</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.28.26.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.99050.0104</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.36.34\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.36.34.9\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.29.27.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Effb2)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.30.28.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Effb0)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.32.30.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.32.30.4.2\">\n<tr class=\"ltx_tr\" id=\"S4.T3.31.29.3.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.31.29.3.1.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\nDecNet+</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.32.30.4.2.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.32.30.4.2.2.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Effb0)</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.33.31.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Mitb1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.34.32.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Mitb0)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.36.34.8\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.36.34.8.2\">\n<tr class=\"ltx_tr\" id=\"S4.T3.35.33.7.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.35.33.7.1.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\nDecNet+</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.36.34.8.2.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.36.34.8.2.2.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Mitb0)</td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.79.79.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.79.79.2.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">#Param.</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.79.79.2.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">7.82m</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.79.79.2.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">4.1m</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.79.79.2.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">2.5k+4.1m</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.79.79.2.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">11.4m</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.79.79.2.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">3.4m</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.79.79.2.7\" 
style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">2.5k+3.4m</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.42.40\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.42.40.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">DRIVE(AUC)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.37.35.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.97560.0074</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.38.36.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.97720.0074</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.39.37.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T3.39.37.3.1\">\\ul</span>0.98610.0039</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.40.38.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98020.0067</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.41.39.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98060.0054</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.42.40.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T3.42.40.6.1\">\\ul</span>0.98320.0045</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.48.46\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.48.46.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">CHASE(AUC)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.43.41.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98440.0045</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.44.42.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98600.0030</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.45.43.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T3.45.43.3.1\">\\ul</span>0.98850.0038</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.46.44.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98470.0034</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.47.45.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98290.0040</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.48.46.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T3.48.46.6.1\">\\ul</span>0.98850.0024</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.53.51\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.53.51.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">CRACK(AUC)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.49.47.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.99040.0116</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.50.48.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98950.0102</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.51.49.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T3.51.49.3.1\">\\ul</span>0.99070.0137</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.53.51.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.52.50.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98950.0113</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.53.51.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" 
id=\"S4.T3.53.51.5.1\">\\ul</span>0.99000.0111</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.61.59\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.61.59.9\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.54.52.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res34)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.55.53.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res18)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.57.55.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.57.55.4.2\">\n<tr class=\"ltx_tr\" id=\"S4.T3.56.54.3.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.56.54.3.1.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\nDecNet+</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.57.55.4.2.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.57.55.4.2.2.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res18)</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.58.56.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res34)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.59.57.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res18)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.61.59.8\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.61.59.8.2\">\n<tr class=\"ltx_tr\" id=\"S4.T3.60.58.7.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.60.58.7.1.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\nDecNet+</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.61.59.8.2.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.61.59.8.2.2.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res18)</td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.79.80.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.79.80.3.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">#Param.</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.79.80.3.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">21.8m</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.79.80.3.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">11.7m</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.79.80.3.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">2.5k+11.7m</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.79.80.3.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">21.5m</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.79.80.3.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">11.4m</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.79.80.3.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">2.5k+11.4m</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.67.65\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.67.65.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">DRIVE(AUC)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.62.60.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.97330.0075</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.63.61.2\" 
style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.97460.0075</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.64.62.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T3.64.62.3.1\">\\ul</span>0.98510.0040</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.65.63.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.97390.0071</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.66.64.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.96810.0096</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.67.65.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T3.67.65.6.1\">\\ul</span>0.98600.0035</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.73.71\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.73.71.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">CHASE(AUC)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.68.66.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98550.0032</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.69.67.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98410.0028</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.70.68.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T3.70.68.3.1\">\\ul</span>0.98720.0032</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.71.69.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98630.0033</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.72.70.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98470.0029</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.73.71.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T3.73.71.6.1\">\\ul</span>0.98930.0029</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.79.77\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.79.77.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">CRACK(AUC)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.74.72.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98790.0119</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.75.73.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98760.0153</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.76.74.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T3.76.74.3.1\">\\ul</span>0.98850.0148</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.77.75.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T3.77.75.4.1\">\\ul</span>0.99000.0128</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.78.76.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98790.0116</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.79.77.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.98870.0122</td>\n</tr>\n</tbody>\n</table>\n</figure>", | |
| "capture": "TABLE III: Inference results (AUC) by different segmentation networks on three datasets. The DecNet+ architecture with a small-scale segmentation module performs better than other segmentation networks in most cases, while introducing few extra learnable parameters over its segmentation module.The symbol \u201c-\u201d means that the model has not generated reasonable results under current settings, and hence no score is reported; See Fig. 5." | |
| }, | |
| "4": { | |
| "table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE IV: </span>Training, inference and storage costs in terms of time per backpropagation step (time per bp step), forward MACs (fwd MACs) and disk occupation, of different segmentation networks tested on CRACK. In general, the DecNet+ architecture slightly increases the occupation in space and time over its segmentation module.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.26\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T4.10.8\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T4.10.8.9\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.3.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(CNN)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.4.2.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(CNN)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T4.6.4.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.6.4.4.2\">\n<tr class=\"ltx_tr\" id=\"S4.T4.5.3.3.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T4.5.3.3.1.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\nDecNet+</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.6.4.4.2.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T4.6.4.4.2.2.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(CNN)</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.7.5.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res34)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.8.6.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res18)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.10.8.8\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.10.8.8.2\">\n<tr class=\"ltx_tr\" id=\"S4.T4.9.7.7.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T4.9.7.7.1.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\nDecNet+</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.10.8.8.2.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T4.10.8.8.2.2.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res18)</td>\n</tr>\n</table>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.26.25.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T4.26.25.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">time per bp step</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.26.25.1.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">1.879 s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.26.25.1.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.937 s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.26.25.1.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.980 s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.26.25.1.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.9408 s</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S4.T4.26.25.1.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.9138 s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.26.25.1.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">1.528 s</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.26.26.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T4.26.26.2.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">fwd MACs</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.26.2.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">276.82 G</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.26.2.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">47.82 G</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.26.26.2.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">55.21 G</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.26.2.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">96.54 G</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.26.2.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">74.79 G</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.26.2.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">82.19 G</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.26.27.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T4.26.27.3.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">disk occupation</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.27.3.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">34,323 KB</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.27.3.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">2,161 KB</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.26.27.3.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">2,266 KB</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.27.3.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">94,874 KB</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.27.3.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">50,118 KB</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.27.3.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">50,224 KB</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.18.16\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T4.18.16.9\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.11.9.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Effb2)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.12.10.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Effb0)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.14.12.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.14.12.4.2\">\n<tr class=\"ltx_tr\" id=\"S4.T4.13.11.3.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T4.13.11.3.1.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\nDecNet+</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.14.12.4.2.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T4.14.12.4.2.2.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Effb0)</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.15.13.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Mitb1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.16.14.6\" 
style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Mitb0)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.18.16.8\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.18.16.8.2\">\n<tr class=\"ltx_tr\" id=\"S4.T4.17.15.7.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T4.17.15.7.1.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\nDecNet+</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.18.16.8.2.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T4.18.16.8.2.2.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Mitb0)</td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.26.28.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T4.26.28.4.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">time per bp step</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.26.28.4.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">1.227 s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.26.28.4.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.994 s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.26.28.4.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">1.094 s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.26.28.4.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">1.109 s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.26.28.4.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.935 s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.26.28.4.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.992 s</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.26.29.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T4.26.29.5.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">fwd MACs</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.29.5.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">38.95 G</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.29.5.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">36.41 G</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.26.29.5.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">43.81 G</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.29.5.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">62.01 G</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.29.5.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">22.15 G</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.29.5.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">29.55 G</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.26.30.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T4.26.30.6.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">disk occupation</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.30.6.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">32,216 KB</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.30.6.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">17,049 KB</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.26.30.6.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">17,154 KB</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.30.6.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">65,282 KB</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.30.6.6\" 
style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">16,971 KB</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.30.6.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">17,037 KB</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.26.24\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T4.26.24.9\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.19.17.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res34)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.20.18.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res18)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.22.20.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.22.20.4.2\">\n<tr class=\"ltx_tr\" id=\"S4.T4.21.19.3.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T4.21.19.3.1.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\nDecNet+</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.22.20.4.2.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T4.22.20.4.2.2.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res18)</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.23.21.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res34)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.24.22.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res18)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.26.24.8\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.26.24.8.2\">\n<tr class=\"ltx_tr\" id=\"S4.T4.25.23.7.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T4.25.23.7.1.1.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\nDecNet+</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.26.24.8.2.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T4.26.24.8.2.2.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">\n(Res18)</td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.26.31.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T4.26.31.7.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">time per bp step</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.26.31.7.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">1.037 s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.26.31.7.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">1.015 s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.26.31.7.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">2.428 s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.26.31.7.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">1.497 s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.26.31.7.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">0.9828 s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.26.31.7.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">1.868 s</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.26.32.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T4.26.32.8.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">fwd MACs</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.32.8.2\" 
style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">81.22 G</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.32.8.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">59.48 G</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.26.32.8.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">66.88 G</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.32.8.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">115.45 G</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.32.8.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">93.70 G</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.26.32.8.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">101.10 G</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.26.33.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T4.26.33.9.1\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">disk occupation</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T4.26.33.9.2\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">100,404 KB</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T4.26.33.9.3\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">55,648 KB</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T4.26.33.9.4\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">55,710 KB</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T4.26.33.9.5\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">96,292 KB</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T4.26.33.9.6\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">51,536 KB</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T4.26.33.9.7\" style=\"padding-top:2.5pt;padding-bottom:2.5pt;\">51,598 KB</td>\n</tr>\n</tbody>\n</table>\n</figure>", | |
| "capture": "TABLE IV: Training, inference and storage costs in terms of time per backpropagation step (time per bp step), forward MACs (fwd MACs) and disk occupation, of different segmentation networks tested on CRACK. In general, the DecNet+ architecture slightly increases the occupation in space and time over its segmentation module." | |
| } | |
| }, | |
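The three costs in TABLE IV can be reproduced with standard PyTorch tooling. Below is a minimal sketch, assuming PyTorch plus the third-party thop MAC counter; `model`, `images`, `labels`, `loss_fn`, and `ckpt_path` are hypothetical placeholders, not the authors' code.

```python
# Minimal sketch (assumption: PyTorch + thop) of measuring the three costs
# reported in TABLE IV: time per bp step, forward MACs, disk occupation.
import os
import time

import torch
from thop import profile  # third-party MAC counter: pip install thop


def measure_costs(model, images, labels, loss_fn, ckpt_path="tmp_weights.pth"):
    optimizer = torch.optim.Adam(model.parameters())

    # 1. Time per backpropagation step: one forward + backward + update,
    #    with GPU synchronization so queued kernels are included.
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    bp_time = time.perf_counter() - start

    # 2. Forward MACs, counted by tracing a single forward pass.
    macs, _params = profile(model, inputs=(images,), verbose=False)

    # 3. Disk occupation of the saved weights, in KB.
    torch.save(model.state_dict(), ckpt_path)
    disk_kb = os.path.getsize(ckpt_path) / 1024

    return bp_time, macs, disk_kb
```

In practice the bp-step time would be averaged over many iterations after a GPU warm-up, rather than taken from a single step as in this sketch.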
| "image_paths": { | |
| "1": { | |
| "figure_path": "2203.02690v2_figure_1.png", | |
| "caption": "Figure 1: Our \u2113_\u20621subscript\u2113_1\\ell_{\\_}1roman_\u2113 start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 1DecNet+ architecture. The \u2113_\u20621subscript\u2113_1\\ell_{\\_}1roman_\u2113 start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 1DecNet with L\ud835\udc3fLitalic_L unfolding layers decomposes an input image f\ud835\udc53fitalic_f into a sparse feature v(L)superscript\ud835\udc63\ud835\udc3fv^{(L)}italic_v start_POSTSUPERSCRIPT ( italic_L ) end_POSTSUPERSCRIPT and a dense feature u(L)superscript\ud835\udc62\ud835\udc3fu^{(L)}italic_u start_POSTSUPERSCRIPT ( italic_L ) end_POSTSUPERSCRIPT, and the segmentation module operates over the sparse feature v(L)superscript\ud835\udc63\ud835\udc3fv^{(L)}italic_v start_POSTSUPERSCRIPT ( italic_L ) end_POSTSUPERSCRIPT for sparse feature segmentation.", | |
| "url": "http://arxiv.org/html/2203.02690v2/extracted/5669903/images/idmunet-overview.png" | |
| }, | |
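The caption of Figure 1 fixes the data flow: the unfolded decomposition network maps f to (u^(L), v^(L)), and only the sparse feature v^(L) reaches the segmentation module. A minimal sketch of that wiring, where `DecNet` and `SegModule` are hypothetical stand-ins for concrete implementations, not the authors' code:

```python
# Minimal sketch (assumption) of the two-stage forward pass in Figure 1:
# an unfolded decomposition network followed by a segmentation module that
# operates on the sparse feature only.
import torch.nn as nn


class DecNetPlus(nn.Module):
    def __init__(self, dec_net: nn.Module, seg_module: nn.Module):
        super().__init__()
        self.dec_net = dec_net        # L unfolding layers: f -> (u_L, v_L)
        self.seg_module = seg_module  # any lightweight segmentation network

    def forward(self, f):
        u_L, v_L = self.dec_net(f)    # dense background, sparse feature
        mask = self.seg_module(v_L)   # segment the sparse feature only
        return mask, v_L, u_L         # features kept for the feature loss
```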
| "2": { | |
| "figure_path": "2203.02690v2_figure_2.png", | |
| "caption": "Figure 3: The input f\ud835\udc53fitalic_f (column 1), zoom-in label (column 2) and zoom-in segmentation results from U_\u20623,32subscriptU_332\\text{U}_{\\_}{3,32}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 3 , 32(CNN), U_\u20622,16subscriptU_216\\text{U}_{\\_}{2,16}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 2 , 16(CNN) and \u2113_\u20621subscript\u2113_1\\ell_{\\_}1roman_\u2113 start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 1DecNet+U_\u20622,16subscriptU_216\\text{U}_{\\_}{2,16}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 2 , 16(CNN) (column 3-5). Images and labels are chosen from the test subset of DRIVE, CHASE and CRACK (row 1-3).", | |
| "url": "http://arxiv.org/html/2203.02690v2/extracted/5669903/images/seg-UCNN-group.png" | |
| }, | |
| "3(a)": { | |
| "figure_path": "2203.02690v2_figure_3(a).png", | |
| "caption": "(a) DRIVE\nFigure 4: Zoom-in of segmentation results on one test image by U_\u20623,32subscriptU_332\\text{U}_{\\_}{3,32}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 3 , 32(CNN) (row 1), U_\u20622,16subscriptU_216\\text{U}_{\\_}{2,16}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 2 , 16(CNN) (row 2) and \u2113_\u20621subscript\u2113_1\\ell_{\\_}1roman_\u2113 start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 1DecNet+U_\u20622,16subscriptU_216\\text{U}_{\\_}{2,16}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 2 , 16(CNN) (row 3) during training procedure on DRIVE (a), CHASE (b) and CRACK (c) datasets, respectively.", | |
| "url": "http://arxiv.org/html/2203.02690v2/extracted/5669903/images/train-seg-drive.png" | |
| }, | |
| "3(b)": { | |
| "figure_path": "2203.02690v2_figure_3(b).png", | |
| "caption": "(b) CHASE\nFigure 4: Zoom-in of segmentation results on one test image by U_\u20623,32subscriptU_332\\text{U}_{\\_}{3,32}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 3 , 32(CNN) (row 1), U_\u20622,16subscriptU_216\\text{U}_{\\_}{2,16}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 2 , 16(CNN) (row 2) and \u2113_\u20621subscript\u2113_1\\ell_{\\_}1roman_\u2113 start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 1DecNet+U_\u20622,16subscriptU_216\\text{U}_{\\_}{2,16}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 2 , 16(CNN) (row 3) during training procedure on DRIVE (a), CHASE (b) and CRACK (c) datasets, respectively.", | |
| "url": "http://arxiv.org/html/2203.02690v2/extracted/5669903/images/train-seg-chase.png" | |
| }, | |
| "3(c)": { | |
| "figure_path": "2203.02690v2_figure_3(c).png", | |
| "caption": "(c) CRACK\nFigure 4: Zoom-in of segmentation results on one test image by U_\u20623,32subscriptU_332\\text{U}_{\\_}{3,32}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 3 , 32(CNN) (row 1), U_\u20622,16subscriptU_216\\text{U}_{\\_}{2,16}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 2 , 16(CNN) (row 2) and \u2113_\u20621subscript\u2113_1\\ell_{\\_}1roman_\u2113 start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 1DecNet+U_\u20622,16subscriptU_216\\text{U}_{\\_}{2,16}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 2 , 16(CNN) (row 3) during training procedure on DRIVE (a), CHASE (b) and CRACK (c) datasets, respectively.", | |
| "url": "http://arxiv.org/html/2203.02690v2/extracted/5669903/images/train-seg-crack.png" | |
| }, | |
| "4(a)": { | |
| "figure_path": "2203.02690v2_figure_4(a).png", | |
| "caption": "(a) DRIVE\nFigure 5: Zoom-in of segmentation results on one test image by U_\u20623,32subscriptU_332\\text{U}_{\\_}{3,32}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 3 , 32(Mitb1) (row 1), U_\u20622,16subscriptU_216\\text{U}_{\\_}{2,16}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 2 , 16(Mitb0) (row 2) and \u2113_\u20621subscript\u2113_1\\ell_{\\_}1roman_\u2113 start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 1DecNet+U_\u20622,16subscriptU_216\\text{U}_{\\_}{2,16}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 2 , 16(Mitb0) (row 3) during training procedure on DRIVE (a), CHASE (b) and CRACK (c) datasets, respectively.", | |
| "url": "http://arxiv.org/html/2203.02690v2/extracted/5669903/images/train-seg-drive-mit.png" | |
| }, | |
| "4(b)": { | |
| "figure_path": "2203.02690v2_figure_4(b).png", | |
| "caption": "(b) CHASE\nFigure 5: Zoom-in of segmentation results on one test image by U_\u20623,32subscriptU_332\\text{U}_{\\_}{3,32}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 3 , 32(Mitb1) (row 1), U_\u20622,16subscriptU_216\\text{U}_{\\_}{2,16}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 2 , 16(Mitb0) (row 2) and \u2113_\u20621subscript\u2113_1\\ell_{\\_}1roman_\u2113 start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 1DecNet+U_\u20622,16subscriptU_216\\text{U}_{\\_}{2,16}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 2 , 16(Mitb0) (row 3) during training procedure on DRIVE (a), CHASE (b) and CRACK (c) datasets, respectively.", | |
| "url": "http://arxiv.org/html/2203.02690v2/extracted/5669903/images/train-seg-chase-mit.png" | |
| }, | |
| "4(c)": { | |
| "figure_path": "2203.02690v2_figure_4(c).png", | |
| "caption": "(c) CRACK\nFigure 5: Zoom-in of segmentation results on one test image by U_\u20623,32subscriptU_332\\text{U}_{\\_}{3,32}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 3 , 32(Mitb1) (row 1), U_\u20622,16subscriptU_216\\text{U}_{\\_}{2,16}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 2 , 16(Mitb0) (row 2) and \u2113_\u20621subscript\u2113_1\\ell_{\\_}1roman_\u2113 start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 1DecNet+U_\u20622,16subscriptU_216\\text{U}_{\\_}{2,16}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 2 , 16(Mitb0) (row 3) during training procedure on DRIVE (a), CHASE (b) and CRACK (c) datasets, respectively.", | |
| "url": "http://arxiv.org/html/2203.02690v2/extracted/5669903/images/train-seg-crack-mit.png" | |
| }, | |
| "5": { | |
| "figure_path": "2203.02690v2_figure_5.png", | |
| "caption": "Figure 6: Histograms in blue of the gray intensities of the random 8 patches of the first test image from DRIVE, CHASE and CRACK (row 1,3,5), and histograms of their feature v\ud835\udc63vitalic_v by trained \u2113_\u20621subscript\u2113_1\\ell_{\\_}1roman_\u2113 start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 1DecNet+U_\u20622,16subscriptU_216\\text{U}_{\\_}{2,16}U start_POSTSUBSCRIPT _ end_POSTSUBSCRIPT 2 , 16(CNN) with Laplacian fitting curves in red (row 2,4,6). They are all normalized to the same height.", | |
| "url": "http://arxiv.org/html/2203.02690v2/extracted/5669903/images/hist-3x-box.png" | |
| } | |
| }, | |
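The red Laplacian curves in Figure 6 can be obtained by a maximum-likelihood fit of a Laplace distribution to the extracted feature values; for a Laplace distribution the MLE location is the sample median and the scale is the mean absolute deviation from it. A minimal sketch, assuming NumPy/SciPy and a hypothetical flat array `v` of v^(L) values from one patch:

```python
# Minimal sketch (assumption: NumPy + SciPy) of producing a Figure 6-style
# histogram with a fitted Laplacian curve. `v` is a hypothetical flat array
# of sparse feature values, not the authors' data.
import numpy as np
from scipy.stats import laplace


def fit_laplace_curve(v, bins=100):
    # Maximum-likelihood Laplace fit: returns (location, scale).
    loc, scale = laplace.fit(v)

    # Histogram of the feature values, normalized as a density.
    density, edges = np.histogram(v, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])

    # Fitted Laplace density at the bin centers (the red curve).
    curve = laplace.pdf(centers, loc=loc, scale=scale)
    return centers, density, curve
```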
| "validation": true, | |
| "references": [], | |
| "url": "http://arxiv.org/html/2203.02690v2" | |
| } |