
Research on Inference by First-Order Predicate Logic Using Deep Learning (Full Text)

Honda, Hiroshi (Keio University)

2022.03.23

Abstract

The aim of this research is to establish a robust and interpretable inference method capable of handling big data by performing symbolic inference with deep learning. Symbolic inference in conventional artificial intelligence is easy for humans to interpret, but it has difficulty coping with the ambiguity contained in big data. Deep learning, conversely, is robust to ambiguity in data but difficult for humans to interpret. This research therefore aims to establish an inference method that offers both robustness and interpretability by fusing conventional symbolic inference with deep learning techniques.

To establish this inference method, knowledge bases are constructed from large volumes of ambiguous data, expressed in first-order predicate logic, and learned with deep learning. Concretely, this is realized by proposing (1) a method for deductive inference over a knowledge base built from Web data (Chapter 2), (2) a method for analogical reasoning over a knowledge base built from Web data (Chapter 3), and (3) a method for deductive inference over a knowledge base built from a trained deep reinforcement learning model (Chapter 4); a rough sketch of the shared idea follows.
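To make the shared idea concrete, the following is a minimal sketch, not the thesis's actual model: it assumes PyTorch, uses a DistMult-style trilinear scorer (one common choice for link prediction over knowledge bases), and invents a toy set of entities and predicates. It embeds the constants and predicate symbols of ground first-order atoms pred(subj, obj) and trains a scorer that can assign high plausibility to an atom not present in the knowledge base.

```python
# Minimal illustrative sketch, NOT the thesis's model: embedding-based
# scoring of ground first-order atoms pred(subj, obj), DistMult-style.
# Assumes PyTorch; the entities, predicates, and facts below are made up.
import torch
import torch.nn as nn

facts = [
    ("parent", "alice", "bob"),
    ("parent", "bob", "carol"),
    ("grandparent", "alice", "carol"),
    ("parent", "dave", "erin"),
    ("parent", "erin", "frank"),
]
query = ("grandparent", "dave", "frank")  # held-out atom to "deduce"

ents = sorted({x for _, s, o in facts + [query] for x in (s, o)})
preds = sorted({p for p, _, _ in facts + [query]})
e_idx = {e: i for i, e in enumerate(ents)}
p_idx = {p: i for i, p in enumerate(preds)}

dim = 16
E = nn.Embedding(len(ents), dim)   # one vector per logical constant
P = nn.Embedding(len(preds), dim)  # one vector per predicate symbol

def score(p, s, o):
    # Trilinear (DistMult-style) plausibility score of pred(subj, obj).
    return (P(p) * E(s) * E(o)).sum(-1)

opt = torch.optim.Adam(list(E.parameters()) + list(P.parameters()), lr=0.05)
for _ in range(200):
    loss = torch.zeros(())
    for p, s, o in facts:
        pt = torch.tensor(p_idx[p])
        st, ot = torch.tensor(e_idx[s]), torch.tensor(e_idx[o])
        neg = torch.randint(len(ents), ())  # corrupt the object at random
        # Margin ranking loss: true atoms should outscore corrupted ones.
        loss = loss + torch.relu(1.0 - score(pt, st, ot) + score(pt, st, neg))
    opt.zero_grad()
    loss.backward()
    opt.step()

p, s, o = query
print("score of unseen fact:",
      score(torch.tensor(p_idx[p]), torch.tensor(e_idx[s]),
            torch.tensor(e_idx[o])).item())
```

The point of the sketch is only the division of labor: the knowledge base stays in symbolic, human-readable form (ground atoms), while the learned embeddings supply robustness to noisy or incomplete data when judging the plausibility of new atoms.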

This chapter is organized as follows. Section 1.2 outlines research on symbolic inference, and Section 1.3 outlines research on neural networks. Building on both, Section 1.4 reviews research on symbolic inference using neural networks. Section 1.5 positions this research within that context, and Section 1.6 summarizes the structure of this thesis.

