Improving the Efficiency of Research and Development Using Molecular Simulation with Machine Learning and Quantum Computers (Full Text)

Katsuhiro Endo, Keio University

2022.03.23

Abstract

Molecular simulation is a versatile and powerful method for investigating the behavior of molecules. Its applications for elucidating a wide range of properties continue to expand, covering macromolecules such as polymers and proteins [1,2], nucleation of methane hydrates and of water vapor [3,4], and confined systems such as carbon nanotubes and nanoslits [5,6].

Molecular simulation methods fall into two broad classes according to how the calculation is performed: molecular dynamics (MD) simulation and Monte Carlo (MC) simulation. MD simulation computes the time evolution of atoms and molecules according to Newton's equations of motion. MC simulation, by contrast, stochastically samples plausible atomic and molecular configurations under specified macroscopic conditions, such as temperature and pressure, based on statistical mechanics.
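
As a concrete, purely illustrative contrast between the two approaches, the following minimal Python sketch (not taken from the thesis; function names and parameter values are hypothetical) advances a single particle in a one-dimensional harmonic potential with a velocity Verlet MD step, and alongside it performs a Metropolis MC step that samples the corresponding Boltzmann distribution.

```python
import numpy as np

# Minimal illustrative sketch (not from the thesis): MD vs. MC for one
# particle in a 1D harmonic potential U(x) = 0.5 * k * x^2.

k, m, kT, dt = 1.0, 1.0, 1.0, 0.01   # spring constant, mass, k_B*T, time step

def force(x):
    return -k * x

def velocity_verlet_step(x, v):
    """One MD step: integrate Newton's equation of motion."""
    a = force(x) / m
    x_new = x + v * dt + 0.5 * a * dt**2
    v_new = v + 0.5 * (a + force(x_new) / m) * dt
    return x_new, v_new

def metropolis_step(x, rng, step=0.5):
    """One MC step: sample the Boltzmann distribution exp(-U(x)/kT)."""
    x_trial = x + rng.uniform(-step, step)
    dU = 0.5 * k * (x_trial**2 - x**2)
    if dU <= 0 or rng.random() < np.exp(-dU / kT):
        return x_trial        # accept the trial configuration
    return x                  # reject: keep the current configuration

rng = np.random.default_rng(0)
x_md, v_md, x_mc = 1.0, 0.0, 1.0
for _ in range(1000):
    x_md, v_md = velocity_verlet_step(x_md, v_md)  # deterministic trajectory
    x_mc = metropolis_step(x_mc, rng)              # stochastic sampling
print(x_md, v_md, x_mc)
```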

Because MD simulation computes the time evolution explicitly, it can reveal the path a molecular system follows over time. MC simulation, on the other hand, can enumerate the states a system may take independently of its time evolution and therefore has the potential to run faster than MD simulation. As their range of application widens, however, both MD and MC simulation face the challenges described below.

First, MD simulation has recently been applied to very large systems and to systems that require long simulation times; system sizes have reached roughly 10^9 particles [7] and simulation lengths roughly 10^10 time steps [8]. Such simulations demand enormous computational resources. To keep the computation time as small as possible, considerable effort continues to be devoted to reducing the number of particles through coarse-graining [9,10] and to developing special-purpose hardware that accelerates MD computation [11]. In particular, although parallel computation in the spatial direction is possible through domain decomposition [12], the sequential nature of the time integration prevents straightforward parallelization in the time direction, so long simulations are harder to carry out than simulations of systems with many particles.
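
The asymmetry between the spatial and temporal directions can be seen in a toy form: within a single step, forces in separate spatial domains can be evaluated independently, whereas each step depends on the result of the previous one. The sketch below is a hypothetical illustration only; it ignores interactions across domain boundaries and is not the decomposition scheme of the cited works.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Hypothetical illustration: within one step, per-domain work parallelizes,
# but successive time steps form a strictly sequential chain.

def forces_in_domain(positions):
    # Placeholder force model (harmonic pull toward the origin); a real
    # domain decomposition would also exchange boundary information.
    return -positions

def one_step(domains, dt=0.01):
    # Spatial direction: the domains are treated as independent here,
    # so their force evaluations can run in parallel.
    with ThreadPoolExecutor() as pool:
        forces = list(pool.map(forces_in_domain, domains))
    # Simplified position update, used only to show the data dependence.
    return [x + dt * f for x, f in zip(domains, forces)]

domains = [np.random.rand(100, 3) for _ in range(8)]  # 8 spatial domains
for _ in range(100):   # time direction: step t+1 needs the result of step t
    domains = one_step(domains)
```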

Moreover, the data produced by an MD simulation is merely a large time series of molecular configurations, and extracting meaningful insight from it requires analysis based on expert knowledge. Experts identify differences between systems, or between regions of a system with different properties, by designing correlation functions or order parameters suited to the target of the analysis. This amounts to stripping away most of the information carried by the molecular dynamics and looking only at what one wants to see, so the information inherent in the simulation data is left largely unexploited.
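
As a small example of this kind of hand-crafted reduction, the hypothetical sketch below (not taken from the thesis) collapses a trajectory array into a single descriptor, the mean squared displacement, discarding almost all per-particle detail in the process.

```python
import numpy as np

# Hypothetical example: reduce a trajectory of shape (n_frames, n_particles, 3)
# to one descriptor, the mean squared displacement (MSD).

def mean_squared_displacement(traj):
    """MSD(t) = <|r_i(t) - r_i(0)|^2>, averaged over particles i."""
    disp = traj - traj[0]                      # displacement from the first frame
    return (disp**2).sum(axis=-1).mean(axis=-1)

# Toy trajectory: random-walk-like motion of 50 particles over 200 frames.
rng = np.random.default_rng(1)
traj = np.cumsum(rng.normal(scale=0.1, size=(200, 50, 3)), axis=0)
msd = mean_squared_displacement(traj)
print(msd[:5])   # a single curve replaces the full set of configurations
```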

Furthermore, although MC simulation can in principle enumerate the states of a system independently of its time evolution, it suffers from a serious practical problem. MC simulation proceeds by sampling the Boltzmann distribution, the statistical-mechanical probability distribution over molecular configurations, but the Boltzmann distribution has a structure that varies sharply both locally and globally, which makes efficient sampling difficult.
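
The difficulty can be illustrated with a hypothetical one-dimensional double-well potential: a simple Metropolis sampler of the Boltzmann distribution exp(-U(x)/kT) rarely crosses the barrier between the two wells when the barrier is high relative to kT, so the chain explores only one part of the distribution within a finite run. The potential, parameters, and sampler below are illustrative, not from the thesis.

```python
import numpy as np

# Hypothetical illustration of slow mixing: Metropolis sampling of the
# Boltzmann distribution for a double-well potential.

kT = 0.1

def U(x):
    return (x**2 - 1.0)**2 / 0.05   # minima at x = -1 and x = +1, high barrier at 0

rng = np.random.default_rng(2)
x = -1.0                            # start in the left-hand well
samples = []
for _ in range(20000):
    x_trial = x + rng.normal(scale=0.2)
    dU = U(x_trial) - U(x)
    if dU <= 0 or rng.random() < np.exp(-dU / kT):
        x = x_trial
    samples.append(x)

# Fraction of samples in the right-hand well; it stays near 0 if the chain
# never crosses the barrier during this run, even though the true Boltzmann
# weight of each well is 0.5.
print((np.array(samples) > 0).mean())
```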

In recent years, methods that apply machine learning or quantum computing have attracted attention as ways to address these challenges in molecular simulation. Machine learning automatically extracts useful information from given data and predicts some quantity of interest. Examples of machine learning applied to molecular simulation include replacing the quantum chemical calculations used to evaluate interactions in MD simulation, which are accurate but costly, with fast machine-learning models [13,14,15]; visualizing complex reaction pathways of proteins and other molecules by projecting them onto low-dimensional representations [16,17]; representing free-energy surfaces efficiently from MC simulation results [18,19]; and transforming the probability distribution sampled by MC simulation into a low-dimensional latent space, where the distribution becomes flatter and easier to sample [20].
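
As a toy analogue of the first of these ideas, the sketch below fits a cheap surrogate (kernel ridge regression with an RBF kernel) to energies of a one-dimensional model potential and then evaluates the surrogate in place of the expensive reference calculation; this is a hypothetical stand-in, not the neural-network potentials used in the cited works [13,14,15].

```python
import numpy as np

# Hypothetical sketch: learn a cheap surrogate for an "expensive" reference
# energy function, in the spirit of machine-learned potentials (toy analogue).

def expensive_energy(x):
    # Stand-in for an accurate but costly calculation (e.g., quantum chemistry).
    return x**4 - 3.0 * x**2 + 0.5 * x

def rbf_kernel(a, b, gamma=2.0):
    return np.exp(-gamma * (a[:, None] - b[None, :])**2)

# Train the surrogate on a modest number of reference evaluations.
rng = np.random.default_rng(3)
x_train = rng.uniform(-2.0, 2.0, size=40)
y_train = expensive_energy(x_train)
K = rbf_kernel(x_train, x_train)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(x_train)), y_train)

def surrogate_energy(x):
    """Cheap prediction used in place of the reference calculation."""
    return rbf_kernel(np.atleast_1d(x), x_train) @ alpha

x_test = np.linspace(-2.0, 2.0, 5)
print(expensive_energy(x_test))   # reference energies
print(surrogate_energy(x_test))   # surrogate predictions
```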

Quantum computers, in turn, exploit properties derived from quantum mechanics, such as the superposition of quantum states and probabilistic measurement, to carry out computations that are infeasible on conventional classical computers. The main application of quantum computing related to molecular simulation is quantum chemical calculation [21,22,23], which determines the interatomic potential energies used in molecular simulation. Because of the superposition of quantum states, the number of interaction combinations that must be evaluated in such calculations grows exponentially with system size, making them impractical on classical computers; using computations that themselves exploit quantum states makes them feasible. Beyond quantum chemistry, the intrinsically probabilistic behavior of quantum systems has also been used to sample distributions that are difficult to sample classically [24], and applications to molecular simulation are anticipated.
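
The superposition and probabilistic measurement referred to here can be illustrated with a tiny state-vector calculation in plain numpy; this is a hypothetical toy, not a quantum chemistry algorithm. Applying a Hadamard gate to each of two qubits puts the register into an equal superposition of four basis states, and measurement returns one of them at random according to the squared amplitudes.

```python
import numpy as np

# Hypothetical toy: state-vector view of superposition and probabilistic
# measurement for two qubits (not a quantum chemistry algorithm).

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard gate

state = np.zeros(4)
state[0] = 1.0                     # start in the basis state |00>
state = np.kron(H, H) @ state      # Hadamard on both qubits: equal superposition

probs = np.abs(state)**2           # Born rule: measurement probabilities
rng = np.random.default_rng(4)
outcomes = rng.choice(4, size=1000, p=probs)   # repeated probabilistic measurements
print(probs)                                   # approximately [0.25, 0.25, 0.25, 0.25]
print(np.bincount(outcomes, minlength=4))      # counts of |00>, |01>, |10>, |11>
```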

The application of machine learning and quantum computing to molecular simulation is thus expanding right now, and these technologies have the potential to contribute to solving the challenges described above.

References

[1] Richard H Gee, Naida Lacevic, and Laurence E Fried. Atomistic simulations of spinodal phase separation preceding polymer crystallization. Nature materials, Vol. 5, No. 1, pp. 39–43, 2006.

[2] Kresten Lindorff-Larsen, Stefano Piana, Ron O Dror, and David E Shaw. How fast-folding proteins fold. Science, Vol. 334, No. 6055, pp. 517–520, 2011.

[3] Matthew R Walsh, Carolyn A Koh, E Dendy Sloan, Amadeu K Sum, and David T Wu. Microsecond simulations of spontaneous methane hydrate nucleation and growth. Science, Vol. 326, No. 5956, pp. 1095–1098, 2009.

[4] Kenji Yasuoka and Mitsuhiro Matsumoto. Molecular dynamics simulation of homogeneous nucleation in supersaturated water vapor. Fluid phase equilibria, Vol. 144, No. 1-2, pp. 369–376, 1998.

[5] S B Legoas, V R Coluci, S F Braga, P Z Coura, S O Dantas, and Douglas S Galvao. Molecular-dynamics simulations of carbon nanotubes as gigahertz oscillators. Physical review letters, Vol. 90, No. 5, p. 055504, 2003.

[6] Qingwei Gao, Yudan Zhu, Yang Ruan, Yumeng Zhang, Wei Zhu, Xiaohua Lu, and Linghong Lu. Effect of adsorbed alcohol layers on the behavior of water molecules confined in a graphene nanoslit: a molecular dynamics study. Langmuir, Vol. 33, No. 42, pp. 11467–11474, 2017.

[7] Yasushi Shibuta, Shinji Sakane, Eisuke Miyoshi, Shin Okita, Tomohiro Takaki, and Munekazu Ohno. Heterogeneity in homogeneous nucleation from billion-atom molecular dynamics simulation of solidification of pure metal. Nature communications, Vol. 8, No. 1, pp. 1–9, 2017.

[8] Sarah R Needham, Selene K Roberts, Anton Arkhipov, Venkatesh P Mysore, Christopher J Tynan, Laura C Zanetti-Domingues, Eric T Kim, Valeria Losasso, Dimitrios Korovesis, Michael Hirsch, Daniel J Rolfe, David T Clarke, Martyn D Winn, Alireza Lajevardipour, Andrew H A Clayton, Linda J Pike, Michela Perani, Peter J Parker, Yibing Shan, David E Shaw, and Marisa L Martin-Fernandez. EGFR oligomerization organizes kinase-active dimers into competent signalling platforms. Nature communications, Vol. 7, No. 1, pp. 1–14, 2016.

[9] Takeshi Aoyagi, Fumio Sawa, Tatsuya Shoji, Hiroo Fukunaga, Jun-ichi Takimoto, and Masao Doi. A general-purpose coarse-grained molecular dynamics program. Computer physics communications, Vol. 145, No. 2, pp. 267–279, 2002.

[10] Soumil Y Joshi and Sanket A Deshmukh. A review of advancements in coarse-grained molecular dynamics simulations. Molecular simulation, Vol. 47, No. 10-11, pp. 786–803, 2021.

[11] Tetsu Narumi, Yousuke Ohno, Noriaki Okimoto, Takahiro Koishi, Atsushi Suenaga, Noriyuki Futatsugi, Ryoko Yanai, Ryutaro Himeno, Shigenori Fujikawa, Makoto Taiji, and Mitsuru Ikei. A 55 tflops simulation of amyloid-forming peptides from yeast prion sup35 with the special-purpose computer system mdgrape-3. In Proceedings of the 2006 ACM/IEEE conference on supercomputing, pp. 49–es, 2006.

[12] Sho Ayuba, Donguk Suh, Kentaro Nomura, Toshikazu Ebisuzaki, and Kenji Yasuoka. Kinetic analysis of homogeneous droplet nucleation using large-scale molecular dynamics simulations. The journal of chemical physics, Vol. 149, No. 4, p. 044504, 2018.

[13] Jörg Behler and Michele Parrinello. Generalized neural-network representation of high-dimensional potential-energy surfaces. Physical review letters, Vol. 98, No. 14, p. 146401, 2007.

[14] Stefan Chmiela, Huziel E Sauceda, Klaus-Robert Müller, and Alexandre Tkatchenko. Towards exact molecular dynamics simulations with machine-learned force fields. Nature communications, Vol. 9, No. 1, pp. 1–10, 2018.

[15] Felix Brockherde, Leslie Vogt, Li Li, Mark E Tuckerman, Kieron Burke, and Klaus-Robert Müller. Bypassing the kohn-sham equations with machine learning. Nature communications, Vol. 8, No. 1, pp. 1–10, 2017.

[16] Christoph Wehmeyer and Frank Noé. Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics. The journal of chemical physics, Vol. 148, No. 24, p. 241703, 2018.

[17] Andreas Mardt, Luca Pasquali, Hao Wu, and Frank Noé. Vampnets for deep learning of molecular kinetics. Nature communications, Vol. 9, No. 1, pp. 1–11, 2018.

[18] João M L Ribeiro, Pablo Bravo, Yihang Wang, and Pratyush Tiwary. Reweighted autoencoded variational bayes for enhanced sampling (rave). The journal of chemical physics, Vol. 149, No. 7, p. 072301, 2018.

[19] Thomas Stecher, Noam Bernstein, and Gábor Csányi. Free energy surface reconstruction from umbrella samples using gaussian process regression. Journal of chemical theory and computation, Vol. 10, No. 9, pp. 4079–4097, 2014.

[20] Frank Noé, Simon Olsson, Jonas Köhler, and Hao Wu. Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning. Science, Vol. 365, No. 6457, 2019.

[21] John Preskill. Quantum computing in the nisq era and beyond. Quantum, Vol. 2, p. 79, 2018.

[22] Bryan T Gard, Linghua Zhu, George S Barron, Nicholas J Mayhall, Sophia E Economou, and Edwin Barnes. Efficient symmetry-preserving state preparation circuits for the variational quantum eigensolver algorithm. npj quantum information, Vol. 6, No. 1, pp. 1–9, 2020.

[23] Panagiotis K Barkoutsos, Jerome F Gonthier, Igor Sokolov, Nikolaj Moll, Gian Salis, Andreas Fuhrer, Marc Ganzhorn, Daniel J Egger, Matthias Troyer, Antonio Mezzacapo, et al. Quantum algorithms for electronic structure calculations: Particle-hole hamiltonian and optimized wave-function expansions. Physical review A, Vol. 98, No. 2, p. 022322, 2018.

[24] Marco Bentivegna, Nicolò Spagnolo, Chiara Vitelli, Fulvio Flamini, Niko Viggianiello, Ludovico Latmiral, Paolo Mataloni, Daniel J Brod, Ernesto F Galvão, Andrea Crespi, Roberta Ramponi, Roberto Osellame, and Fabio Sciarrino. Experimental scattershot boson sampling. Science advances, Vol. 1, No. 3, p. e1400255, 2015.

[25] 渡辺澄夫. ベイズ統計の理論と方法 [Theory and Methods of Bayesian Statistics]. コロナ社, 2012.

[26] Svetlozar Todorov Rachev and R M Shortt. Duality theorems for kantorovich-rubinstein and wasserstein functionals. Warszawa: Instytut Matematyczny Polskiej Akademii Nauk, 1990.

[27] Kevin P Murphy. Machine learning: a probabilistic perspective. MIT press, 2012.

[28] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

[29] Ian Goodfellow, Jean Pouget-Abadie, and Mehdi Mirza. Generative adversarial nets. In Advances in neural information processing systems 27, pp. 2672–2680, 2014.

[30] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.

[31] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International conference on machine learning, pp. 214–223. PMLR, 2017.

[32] Casey Chu, Kentaro Minami, and Kenji Fukumizu. Smoothness and stability in gans. arXiv preprint arXiv:2002.04185, 2020.

[33] Naveen Kodali, Jacob Abernethy, James Hays, and Zsolt Kira. On convergence and stability of gans. arXiv preprint arXiv:1705.07215, 2017.

[34] Behnam Neyshabur, Srinadh Bhojanapalli, and Ayan Chakrabarti. Stabilizing gan training with multiple random projections. arXiv preprint arXiv:1705.07831, 2017.

[35] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of wasserstein gans. arXiv preprint arXiv:1704.00028, 2017.

[36] Ricky T Q Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. arXiv preprint arXiv:1806.07366, 2018.

[37] Daan Frenkel, Berend Smit, and Mark A Ratner. Understanding molecular simulation: from algorithms to applications, Vol. 2. Academic press San Diego, 1996.

[38] Keith W Hastings. Monte carlo sampling methods using markov chains and their applications. Oxford University Press, 1970.

[39] Jascha Sohl-Dickstein, Mayur Mudigonda, and Michael DeWeese. Hamiltonian monte carlo without detailed balance. In International conference on machine learning, pp. 719–726. PMLR, 2014.

[40] Hidemaro Suwa and Synge Todo. Markov chain monte carlo method without detailed balance. Physical review letters, Vol. 105, No. 12, p. 120603, 2010.

[41] Hiroaki Nishizawa and Hisashi Okumura. Comparison of replica-permutation molecular dynamics simulations with and without detailed balance condition. Journal of the physical society of japan, Vol. 84, No. 7, p. 074801, 2015.

[42] Jean-Paul Ryckaert, Giovanni Ciccotti, and Herman J Berendsen. Numerical integration of the cartesian equations of motion of a system with constraints: molecular dynamics of n-alkanes. Journal of computational physics, Vol. 23, No. 3, pp. 327–341, 1977.

[43] Berk Hess, Henk Bekker, Herman J Berendsen, and Johannes G Fraaije. LINCS: a linear constraint solver for molecular simulations. Journal of computational chemistry, Vol. 18, No. 12, pp. 1463–1472, 1997.

[44] John E Lennard-Jones. Cohesion. Proceedings of the physical society (1926-1948), Vol. 43, No. 5, p. 461, 1931.

[45] Hendrik A Lorentz. Ueber die anwendung des satzes vom virial in der kinetischen theorie der gase. Annalen der physik, Vol. 248, No. 1, pp. 127–136, 1881.

[46] Daniel Berthelot. Sur le mélange des gaz. Comptes rendus, Vol. 126, pp. 1703–1706, 1898.

[47] Shuichi Nosé. A unified formulation of the constant temperature molecular dynamics methods. The journal of chemical physics, Vol. 81, No. 1, pp. 511–519, 1984.

[48] William G Hoover. Canonical dynamics: Equilibrium phase-space distributions. Physical review A, Vol. 31, No. 3, p. 1695, 1985.

[49] Hans C Andersen. Molecular dynamics simulations at constant pressure and/or temperature. The journal of chemical physics, Vol. 72, No. 4, pp. 2384–2393, 1980.

[50] Michele Parrinello and Aneesur Rahman. Crystal structure and pair potentials: A molecular-dynamics study. Physical review letters, Vol. 45, No. 14, p. 1196, 1980.

[51] Jacques-Louis Lions, Yvon Maday, and Gabriel Turinici. Résolution d’edp par un schéma en temps pararéel. Comptes rendus series I mathematics, Vol. 332, No. 7, pp. 661–668, 2001.

[52] Ľubor Ladický, SoHyeon Jeong, Barbara Solenthaler, Marc Pollefeys, and Markus Gross. Data-driven fluid simulations using regression forests. ACM transactions on graphics, Vol. 34, No. 6, pp. 1–9, 2015.

[53] Julia Ling, Andrew Kurzawski, and Jeremy Templeton. Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. Journal of fluid mechanics, Vol. 807, pp. 155–166, 2016.

[54] Jiang Wang, Simon Olsson, Christoph Wehmeyer, Adrià Pérez, Nicholas E Charron, Gianni De Fabritiis, Frank Noé, and Cecilia Clementi. Machine learning of coarse-grained molecular dynamics force fields. ACS central science, Vol. 5, No. 5, pp. 755–767, 2019.

[55] So Takamoto, Chikashi Shinagawa, Daisuke Motoki, Kosuke Nakago, Wenwen Li, Iori Kurata, Taku Watanabe, Yoshihiro Yayama, Hiroki Iriguchi, Yusuke Asano, Tasuku Onodera, Takafumi Ishii, Takao Kudo, Hideki Ono, Ryohto Sawada, Ryuichiro Ishitani, Marc Ong, Taiki Yamaguchi, Toshiki Kataoka, Akihide Hayashi, and Takeshi Ibuka. PFP: Universal neural network potential for material discovery. arXiv preprint arXiv:2106.14583, 2021.

[56] Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.

[57] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

[58] Roderic Lakes and Roderic S Lakes. Viscoelastic materials. Cambridge university press, 2009.

[59] Erik Lindahl, Berk Hess, and David Van Der Spoel. GROMACS 3.0: a package for molecular simulation and trajectory analysis. Molecular modeling annual, Vol. 7, No. 8, pp. 306–317, 2001.

[60] Katie A Maerzke, Nathan E Schultz, Richard B Ross, and J Ilja Siepmann. Trappe-ua force field for acrylates and monte carlo simulations for their mixtures with alkanes and alcohols. The journal of physical chemistry B, Vol. 113, No. 18, pp. 6415–6425, 2009.

[61] Ryo Kawada, Katsuhiro Endo, Daisuke Yuhara, and Kenji Yasuoka. MD-GAN with multi-particle input: the machine learning of long-time molecular behavior from short-time MD data. arXiv preprint arXiv:2202.00995, 2022.

[62] Rossend Rey, Klaus B Møller, and James T Hynes. Hydrogen bond dynamics in water and ultrafast infrared spectroscopy. The journal of physical chemistry A, Vol. 106, No. 50, pp. 11993–11996, 2002.

[63] Biman Bagchi. Water dynamics in the hydration layer around proteins and micelles. Chemical reviews, Vol. 105, No. 9, pp. 3197–3219, 2005.

[64] Damien Laage, Guillaume Stirnemann, Fabio Sterpone, Rossend Rey, and James T Hynes. Reorientation and allied dynamics in water and aqueous solutions. Annual review of physical chemistry, Vol. 62, pp. 395–416, 2011.

[65] Masashi Sugiyama, Shinichi Nakajima, Hisashi Kashima, Paul Von Buenau, and Motoaki Kawanabe. Direct importance estimation with model selection and its application to covariate shift adaptation. In Advances in neural information processing systems 20, Vol. 7, pp. 1433–1440, 2007.

[66] Masashi Sugiyama, Taiji Suzuki, and Takafumi Kanamori. Density ratio estimation: A comprehensive review (statistical experiment and its related topics). 数理解析研究所講究録, Vol. 1703, pp. 10–31, 2010.

[67] Gabriel Peyré and Marco Cuturi. Computational optimal transport: With applications to data science. Foundations and trends® in machine learning, Vol. 11, No. 5-6, pp. 355–607, 2019.

[68] Peter Dunn-Rankin, Gerald A Knezek, Susan R Wallace, and Shuqiang Zhang. Scaling methods. Psychology press, 2014.

[69] Jan De Leeuw and Patrick Mair. Multidimensional scaling using majorization: Smacof in R. Journal of statistical software, Vol. 31, pp. 1–30, 2009.

[70] Paul S Nerenberg, Brian Jo, Clare So, Ajay Tripathy, and Teresa Head-Gordon. Optimizing solute–water van der waals interactions to reproduce solvation free energies. The journal of physical chemistry B, Vol. 116, No. 15, pp. 4524–4534, 2012.

[71] Dail E Chapman, Jonathan K Steck, and Paul S Nerenberg. Optimizing protein– protein van der waals interactions for the amber ff9x/ff12 force field. Journal of chemical theory and computation, Vol. 10, No. 1, pp. 273–281, 2014.

[72] Hans W Horn, William C Swope, Jed W Pitera, Jeffry D Madura, Thomas J Dick, Greg L Hura, and Teresa Head-Gordon. Development of an improved four-site water model for biomolecular simulations: TIP4P-Ew. Journal of chemical physics, Vol. 120, pp. 9665–9678, 2004.

[73] Anselm H C Horn. A consistent force field parameter set for zwitterionic amino acid residues. Journal of molecular modeling, Vol. 20, No. 11, pp. 1–14, 2014.

[74] Ikki Yasuda, Katsuhiro Endo, Eiji Yamamoto, Yoshinori Hirano, and Kenji Yasuoka. Ligand-induced protein dynamics differences correlate with protein-ligand binding affinities: An unsupervised deep learning approach. arXiv preprint arXiv:2109.01339, 2021.

[75] Lars Ruthotto and Eldad Haber. Deep neural networks motivated by partial differential equations. Journal of mathematical imaging and vision, Vol. 62, No. 3, pp. 352–364, 2020.

[76] George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. arXiv preprint arXiv:1912.02762, 2019.

[77] Samuel J Greydanus, Misko Dzamba, and Jason Yosinski. Hamiltonian neural networks. arXiv preprint arXiv:1906.01563, 2019.

[78] E C Neyts, B J Thijsse, M J Mees, K M Bal, and G Pourtois. Establishing uniform acceptance in force biased monte carlo simulations. Journal of chemical theory and computation, Vol. 8, No. 6, pp. 1865–1869, 2012.

[79] M Rao and B J Berne. On the force bias monte carlo simulation of simple liquids. The Journal of chemical physics, Vol. 71, No. 1, pp. 129–132, 1979.

[80] Bin Chen, J Ilja Siepmann, Kwang J Oh, and Michael L Klein. Aggregation-volume-bias monte carlo simulations of vapor-liquid nucleation barriers for lennard-jonesium. The Journal of chemical physics, Vol. 115, No. 23, pp. 10903–10913, 2001.

[81] Yuji Sugita and Yuko Okamoto. Replica-exchange molecular dynamics method for protein folding. Chemical physics letters, Vol. 314, No. 1-2, pp. 141–151, 1999.

[82] Simon Duane, Anthony D Kennedy, Brian J Pendleton, and Duncan Roweth. Hybrid monte carlo. Physics letters B, Vol. 195, No. 2, pp. 216–222, 1987.

[83] Glenn M Torrie and John P Valleau. Nonphysical sampling distributions in monte carlo free-energy estimation: Umbrella sampling. Journal of computational physics, Vol. 23, No. 2, pp. 187–199, 1977.

[84] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 652–660, 2017.

[85] P P Ewald. Die berechnung optischer und elektrostatischer gitterpotentiale. Annals of physics, Vol. 369, No. 3, pp. 253–287, Jan 1921.

[86] Junwei Liu, Huitao Shen, Yang Qi, Zi Y Meng, and Liang Fu. Self-learning monte carlo method and cumulative update in fermion systems. Physical review B, Vol. 95, No. 24, p. 241104, 2017.

[87] Huitao Shen, Junwei Liu, and Liang Fu. Self-learning monte carlo with deep neural networks. Physical review B, Vol. 97, No. 20, p. 205140, 2018.

[88] Yuki Nagai, Huitao Shen, Yang Qi, Junwei Liu, and Liang Fu. Self-learning monte carlo method: Continuous-time algorithm. Physical review B, Vol. 96, No. 16, p. 161102, 2017.

[89] Yuki Nagai, Masahiko Okumura, and Akinori Tanaka. Self-learning monte carlo method with behler-parrinello neural networks. Physical review B, Vol. 101, No. 11, p. 115111, 2020.

[90] Max Tillmann, Borivoje Dakić, René Heilmann, Stefan Nolte, Alexander Szameit, and Philip Walther. Experimental boson sampling. Nature photonics, Vol. 7, No. 7, pp. 540–544, 2013.

[91] Matthew A Broome, Alessandro Fedrizzi, Saleh Rahimi-Keshari, Justin Dove, Scott Aaronson, Timothy C Ralph, and Andrew G White. Photonic boson sampling in a tunable circuit. Science, Vol. 339, No. 6121, pp. 794–798, 2013.

[92] Michele Mosca and Christof Zalka. Exact quantum fourier transforms and discrete logarithm algorithms. International journal of quantum information, Vol. 2, No. 01, pp. 91–100, 2004.

[93] Juan M Arrazola, Alain Delgado, Bhaskar R Bardhan, and Seth Lloyd. Quantum- inspired algorithms in practice. arXiv preprint arXiv:1905.10415, 2019.

[94] Wu Deng, Hailong Liu, Junjie Xu, Huimin Zhao, and Yingjie Song. An improved quantum-inspired differential evolution algorithm for deep belief network. IEEE transactions on instrumentation and measurement, Vol. 69, No. 10, pp. 7319–7327, 2020.

[95] Ahmed A A El-Latif, Bassem Abd-El-Atty, Irfan Mehmood, Khan Muhammad, Salvador E Venegas-Andraca, and Jialiang Peng. Quantum-inspired blockchain-based cybersecurity: securing smart edge utilities in iot-based smart cities. Information processing & management, Vol. 58, No. 4, p. 102549, 2021.

[96] Abhinav Kandala, Antonio Mezzacapo, Kristan Temme, Maika Takita, Markus Brink, Jerry M Chow, and Jay M Gambetta. Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets. Nature, Vol. 549, No. 7671, pp. 242–246, 2017.

[97] Katsuhiro Endo, Taichi Nakamura, Keisuke Fujii, and Naoki Yamamoto. Quantum self-learning monte carlo and quantum-inspired fourier transform sampler. Physical review research, Vol. 2, No. 4, p. 043442, 2020.

[98] Daniel E Browne. Efficient classical simulation of the quantum fourier transform. New journal of physics, Vol. 9, No. 5, p. 146, 2007.

[99] Ning Qian. On the momentum term in gradient descent learning algorithms. Neural networks, Vol. 12, No. 1, pp. 145–151, 1999.
