Akagi, T., Onishi, M., Masuda, K., Kuroki, R., Baba, K., Takeshita, K., et al.
(2020) Explainable deep learning reproduces a ‘professional eye’ on the
diagnosis of internal disorders in persimmon fruit. Plant Cell Physiol. 61:
1967–1973.
Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., Samek, W.,
et al. (2015) On pixel-wise explanations for non-linear classifier decisions
by layer-wise relevance propagation. PLoS One. 10: e0130140.
Bowman, J.L., Arteaga-Vazquez, M., Berger, F., Briginshaw, L.N., Carella,
P., Aguilar-Cruz, A., et al. (2022) The Renaissance and enlightenment
of Marchantia as a model system. Plant Cell 34: 3512–3542.
Bowman, J.L., Kohchi, T., Yamato, K.T., Jenkins, J., Shu, S., Ishizaki, K., et al.
(2017) Insights into land plant evolution garnered from the Marchantia
polymorpha genome. Cell 171: 287–304.e15.
Chen, S., Zhou, Y., Chen, Y. and Gu, J. (2018) fastp: an ultra-fast all-in-one
FASTQ preprocessor. Bioinformatics 34: i884–i890.
Chitwood, D.H. and Sinha, N.R. (2016) Evolutionary and environmental
forces sculpting leaf development. Curr. Biol. 26: R297–R306.
Danecek, P., Bonfield, J.K., Liddle, J., Marshall, J., Ohan, V., Pollard, M.O.,
et al. (2021) Twelve years of SAMtools and BCFtools. GigaScience 10:
giab008.
Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., et al.
(2014) DeCAF: a deep convolutional activation feature for generic
visual recognition. In Proceedings of the 31st International Conference on Machine Learning. Edited by Xing, E.P. and Jebara, T. pp.
647–655. PMLR (Proceedings of Machine Learning Research), Beijing,
China.
Flores-Sandoval, E., Eklund, D.M., Bowman, J.L. and Bomblies, K. (2015) A
simple auxin transcriptional response system regulates multiple morphogenetic processes in the liverwort Marchantia polymorpha. PLoS
Genet. 11: e1005207.
Gamborg, O.L., Miller, R.A. and Ojima, K. (1968) Nutrient requirements
of suspension cultures of soybean root cells. Exp. Cell Res. 50: 151–158.
Geirhos, R., Jacobsen, J.-H., Michaelis, C., Zemel, R., Brendel, W., Bethge, M.,
et al. (2020) Shortcut learning in deep neural networks. Nat. Mach. Intell.
2: 665–673.
Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F.A. and
Brendel, W. (2019) ImageNet-trained CNNs are biased towards texture;
increasing shape bias improves accuracy and robustness. In Proceedings
of International Conference on Learning Representations (ICLR) 2019.
New Orleans, LA.
Gurevitch, J. and Hedges, L. (1993) Meta-analysis: combining the results
of independent experiments. In Design and Analysis of Ecological Experiments. Edited by Scheiner, S.M. and Gurevitch, J. pp. 378–398. Chapman
and Hall, New York.
Hendrycks, D., Basart, S., Mazeika, M., Zou, A., Kwon, J., Mostajabi, M., et al.
(2022) Scaling out-of-distribution detection for real-world settings. In
Proceedings of the 39th International Conference on Machine Learning.
Baltimore, MD.
He, K., Zhang, X., Ren, S. and Sun, J. (2016) Deep residual learning for image
recognition. In 2016 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR). Las Vegas, NV. pp. 770–778.
Downloaded from https://academic.oup.com/pcp/article/64/11/1343/7292451 by Kyoto University user on 25 January 2024
heatmaps for correctly predicted test images for each accession/sex/developmental day, we selected as the representative heatmap the one for the image with the
highest unnormalized logit, i.e. the raw output of the last fully connected layer in ResNet50. The logit values were used to quantify the degree
of representativeness of the input images, following recent studies on out-of-distribution detection in deep learning models (Hendrycks et al. 2022; Vaze et al.
2022).
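The selection step above can be sketched as follows; this is a minimal illustration with hypothetical logit values and image identifiers, not code from the study.

```python
import numpy as np

def select_representative(logits, image_ids):
    """Return the identifier of the image with the highest unnormalized
    logit (the raw output of the final fully connected layer), used here
    as a proxy for how representative the image is of its class."""
    idx = int(np.argmax(logits))
    return image_ids[idx]

# Hypothetical logits for four correctly predicted images of one class
logits = np.array([3.1, 7.8, 5.2, 6.4])
ids = ["img_a", "img_b", "img_c", "img_d"]
select_representative(logits, ids)  # -> "img_b"
```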
We also adopted XRAI (Kapishnikov et al. 2019) to obtain fine-grained
visualization of relevant regions for classification. We modified the original
XRAI implementation in TensorFlow (https://github.com/PAIR-code/saliency,
version 0.2.0) to analyze our PyTorch models. The selection criteria for representative heatmaps were the same as those for Grad-CAM.
We used IoU (intersection over union), also known as the Jaccard coefficient, to measure the degree of overlap between the aerial part of gemmalings and the
Grad-CAM/XRAI heatmaps. To this end, we binarized the normalized Grad-CAM/XRAI heatmaps with a threshold of 0.5. For the gemmalings, we utilized
the silhouette images (Fig. 5). For each binarized image and the corresponding
heatmap, IoU is defined as the area of intersection between the image and the
heatmap divided by the area of their union.
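The IoU computation described above can be sketched as below; the heatmap and silhouette values are hypothetical placeholders, not data from the study.

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over union (Jaccard coefficient) of two boolean
    masks of identical shape."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union > 0 else 0.0

# Binarize a normalized heatmap at the 0.5 threshold, as described above
heatmap = np.array([[0.9, 0.6], [0.2, 0.1]])
heatmap_mask = heatmap >= 0.5
silhouette = np.array([[True, True], [True, False]])  # gemmaling mask
iou(heatmap_mask, silhouette)  # -> 2/3
```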
Plant Cell Physiol. 64(11): 1343–1355 (2023) doi:https://doi.org/10.1093/pcp/pcad117
Moayeri, M., Pope, P., Balaji, Y. and Feizi, S. (2022) A comprehensive study of image classification model sensitivity to foregrounds, backgrounds,
and visual attributes. In 2022 IEEE/CVF Conference on Computer Vision
and Pattern Recognition (CVPR). New Orleans, LA, pp. 19087–19097.
Ohashi, K., Makino, T.T., Arikawa, K. and Kudo, G. (2015) Floral colour
change in the eyes of pollinators: testing possible constraints and correlated evolution. Funct. Ecol. 29: 1144–1155.
Quinlan, A.R. and Hall, I.M. (2010) BEDTools: a flexible suite of utilities
for comparing genomic features. Bioinformatics 26: 841–842.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., et al. (2015)
ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115:
211–252.
Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D. and Batra, D.
(2017) Grad-CAM: visual explanations from deep networks via gradient-based localization. In 2017 IEEE International Conference on Computer
Vision (ICCV). Venice, Italy, pp. 618–626.
Sharif Razavian, A., Azizpour, H., Sullivan, J. and Carlsson, S. (2014) CNN features off-the-shelf: an astounding baseline for recognition. In Computer
Vision and Pattern Recognition (CVPR) 2014, DeepVision Workshop,
June 28, 2014. Columbus, Ohio.
Shimamura, M. (2016) Marchantia polymorpha: taxonomy, phylogeny
and morphology of a model system. Plant Cell Physiol. 57: 230–256.
Singh, A.K., Ganapathysubramanian, B., Sarkar, S. and Singh, A. (2018) Deep
learning for plant stress phenotyping: trends and future perspectives.
Trends Plant Sci. 23: 883–898.
Solly, J.E., Cunniffe, N.J. and Harrison, C.J. (2017) Regional growth rate differences specified by apical notch activities regulate liverwort thallus shape.
Curr. Biol. 27: 16–26.
Vasimuddin, M., Misra, S., Li, H. and Aluru, S. (2019) Efficient architecture-aware acceleration of BWA-MEM for multicore systems. In 2019 IEEE
International Parallel and Distributed Processing Symposium (IPDPS).
Rio de Janeiro, Brazil, pp. 314–324.
Vaze, S., Han, K., Vedaldi, A. and Zisserman, A. (2022) Open-set recognition: a good closed-set classifier is all you need? In Proceedings of International Conference on Learning Representations (ICLR) 2022. Virtual
Event, USA.
Yu, Y., Ouyang, Y., Yao, W. and Hancock, J. (2018) shinyCircos: an R/Shiny
application for interactive creation of Circos plot. Bioinformatics 34:
1229–1231.
Ishizaki, K., Chiyoda, S., Yamato, K.T. and Kohchi, T. (2008) Agrobacterium-mediated transformation of the haploid liverwort Marchantia polymorpha L., an emerging model for plant biology. Plant Cell Physiol. 49:
1084–1091.
Iwasaki, M., Kajiwara, T., Yasui, Y., Yoshitake, Y., Miyazaki, M., Kawamura, S.,
et al. (2021) Identification of the sex-determining factor in the liverwort
Marchantia polymorpha reveals unique evolution of sex chromosomes
in a haploid system. Curr. Biol. 31: 5522–5532.e7.
Kapishnikov, A., Bolukbasi, T., Viégas, F. and Terry, M. (2019) XRAI:
better attributions through regions. In 2019 IEEE/CVF International
Conference on Computer Vision (ICCV). Seoul, Korea (South), pp.
4947–4956.
Khorram, S., Lawson, T. and Fuxin, L. (2021) iGOS++: integrated
gradient optimized saliency by bilateral perturbations. In Proceedings of the Conference on Health, Inference, and Learning.
pp. 174–182. Association for Computing Machinery (CHIL’21),
New York, NY.
Kohchi, T., Yamato, K.T., Ishizaki, K., Yamaoka, S. and Nishihama, R. (2021)
Development and molecular genetics of Marchantia polymorpha. Annu.
Rev. Plant Biol. 72: 677–702.
Kubilius, J., Bracci, S. and Op de Beeck, H.P. (2016) Deep neural networks as
a computational model for human shape sensitivity. PLoS Comput. Biol.
12: e1004896.
Kutsuna, N., Higaki, T., Matsunaga, S., Otsuki, T., Yamaguchi, M., Fujii,
H., et al. (2012) Active learning framework with iterative clustering
for bioimage classification. Nat. Commun. 3: 1032.
Li, H., Handsaker, B., Wysoker, A., Fennell, T., Ruan, J., Homer, N., et al. (2009)
The sequence alignment/map format and SAMtools. Bioinformatics 25:
2078–2079.
Maaten, L.V.D. and Hinton, G.E. (2008) Visualizing data using t-SNE. J. Mach.
Learn. Res. 9: 2579–2605.
McKenna, A., Hanna, M., Banks, E., Sivachenko, A., Cibulskis, K., Kernytsky,
A., et al. (2010) The Genome Analysis Toolkit: a MapReduce framework
for analyzing next-generation DNA sequencing data. Genome Res. 20:
1297–1303.
...