[1] D. Aussel. New developments in quasiconvex optimization. In S. A. R. Al-Mezel,
F. R. M. Al-Solamy, and Q. H. Ansari, editors, Fixed Point Theory, Variational
Analysis, and Optimization, chapter 5, pages 139–169. Chapman and Hall/CRC,
2014.
[2] H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator
Theory in Hilbert Spaces. Springer International Publishing, second edition, 2017.
[3] C. Beltran and F. J. Heredia. An effective line search for the subgradient method.
Journal of Optimization Theory and Applications, 125(1):1–18, 2005.
[4] V. Berinde. Iterative Approximation of Fixed Points, volume 1912 of Lecture
Notes in Mathematics. Springer-Verlag, Berlin Heidelberg, 2007.
[5] D. P. Bertsekas, A. Nedic, and A. E. Ozdaglar. Convex Analysis and Optimization. Athena Scientific, 2003.
[6] L. Bottou. Stochastic gradient learning in neural networks. In Proceedings of
Neuro-Nîmes 91, Nîmes, France, 1991. EC2.
[7] S. P. Bradley and S. C. Frey. Fractional programming with homogeneous functions. Operations Research, 22(2):350–357, 1974.
[8] Y. Censor and A. Segal. Algorithms for the quasiconvex feasibility problem.
Journal of Computational and Applied Mathematics, 185(1):34–50, 2006.
[9] Y. Cheung and J. Lou. Proximal average approximated incremental gradient
descent for composite penalty regularized empirical risk minimization. Machine
Learning, 106(4):595–622, 2017.
[10] P. L. Combettes. A block-iterative surrogate constraint splitting method for
quadratic signal recovery. IEEE Transactions on Signal Processing, 51(7):1771–
1782, 2003.
[11] P. L. Combettes and P. Bondon. Hard-constrained inconsistent signal feasibility
problems. IEEE Transactions on Signal Processing, 47(9):2460–2468, 1999.
[12] P. L. Combettes and J. C. Pesquet. A Douglas–Rachford splitting approach to
nonsmooth convex variational signal recovery. IEEE Journal of Selected Topics
in Signal Processing, 1(4):564–574, 2007.
[13] N. Cristianini, J. Shawe-Taylor, et al. An Introduction to Support Vector Machines
and Other Kernel-Based Learning Methods. Cambridge University Press, 2000.
[14] J. Y. B. Cruz and W. D. Oliveira. On weak and strong convergence of the projected gradient method for convex optimization in real Hilbert spaces. Numerical
Functional Analysis and Optimization, 37(2):129–144, 2016.
[15] D. Dheeru and E. K. Taniskidou. UCI machine learning repository, 2017.
[16] M. E. Dyer. Calculating surrogate constraints. Mathematical Programming,
19(1):255–278, 1980.
[17] K. Fujiwara, K. Hishinuma, and H. Iiduka. Evaluation of stochastic approximation algorithm and variants for learning support vector machines. Linear and
Nonlinear Analysis, 4(1):29–61, 2018.
[18] H. Greenberg and W. Pierskalla. Quasi-conjugate functions and surrogate duality.
Cahiers du Centre d'Études de Recherche Opérationnelle, 15:437–448, 1973.
[19] N. Hadjisavvas. Convexity, generalized convexity, and applications. In S. A. R.
Al-Mezel, F. R. M. Al-Solamy, and Q. H. Ansari, editors, Fixed Point Theory,
Variational Analysis, and Optimization, chapter 4, pages 139–169. Chapman and
Hall/CRC, 2014.
[20] B. Halpern. Fixed points of nonexpansive maps. Bulletin of the American Mathematical Society, 73:957–961, 1967.
[21] G. H. Hardy, J. E. Littlewood, and G. Pólya. Inequalities. Cambridge University
Press, second edition, 1988.
[22] W. L. Hare and Y. Lucet. Derivative-free optimization via proximal point methods. Journal of Optimization Theory and Applications, 160(1):204–220, 2014.
[23] Y. Hayashi and H. Iiduka. Optimality and convergence for convex ensemble
learning with sparsity and diversity based on fixed point optimization. Neurocomputing, 273:367–372, 2018.
[24] K. Hishinuma and H. Iiduka. Parallel subgradient method for nonsmooth convex
optimization with a simple constraint. Linear and Nonlinear Analysis, 1(1):67–
77, 2015.
[25] K. Hishinuma and H. Iiduka. Convergence property, computational performance,
and usability of fixed point quasiconvex subgradient method. The 6th Asian
Conference on Nonlinear Analysis and Optimization (Oral), 2017.
[26] K. Hishinuma and H. Iiduka. Flexible stepsize selection of subgradient methods for constrained convex optimization. The 10th Anniversary Conference on
Nonlinear Analysis and Convex Analysis (Oral), 2017.
[27] K. Hishinuma and H. Iiduka. Iterative method for solving constrained quasiconvex optimization problems based on the Krasnosel'skiĭ–Mann fixed point approximation method. RIMS Workshop on Nonlinear Analysis and Convex Analysis
(Oral), 2017.
[28] K. Hishinuma and H. Iiduka. Convergence analysis of incremental and parallel
line search subgradient methods in Hilbert space. Journal of Nonlinear and
Convex Analysis, 20(9):1937–1947, 2019.
[29] K. Hishinuma and H. Iiduka. Convergence rate analyses of fixed point quasiconvex subgradient method. Joint Conference NACA-ICOTA2019: International
Conference on Nonlinear Analysis and Convex Analysis, International Conference on Optimization: Techniques and Applications (Oral), 2019.
[30] K. Hishinuma and H. Iiduka. Incremental and parallel machine learning algorithms with automated learning rate adjustments. Frontiers in Robotics and AI,
6:77, 2019.
[31] K. Hishinuma and H. Iiduka. Fixed point quasiconvex subgradient method. European Journal of Operational Research, 282(2):428–437, 2020.
[32] K. Hishinuma and H. Iiduka. Supplementary data S1 for the article entitled
"Fixed point quasiconvex subgradient method". https://doi.org/10.1016/j.ejor.2019.09.037, 2020.
[33] Y. Hu, X. Yang, and C.-K. Sim. Inexact subgradient methods for quasi-convex
optimization problems. European Journal of Operational Research, 240(2):315–
327, 2015.
[34] Y. Hu, C. K. W. Yu, and C. Li. Stochastic subgradient method for quasi-convex
optimization problems. Journal of Nonlinear and Convex Analysis, 17(4):711–
724, 2016.
[35] Y. Hu, C. K. W. Yu, C. Li, and X. Yang. Conditional subgradient methods
for constrained quasi-convex optimization problems. Journal of Nonlinear and
Convex Analysis, 17(10):2143–2158, 2016.
[36] H. Iiduka. Iterative algorithm for solving triple-hierarchical constrained optimization problem. Journal of Optimization Theory and Applications, 148(3):580–592,
2011.
[37] H. Iiduka. Fixed point optimization algorithm and its application to power control in CDMA data networks. Mathematical Programming, 133(1):227–242, 2012.
[38] H. Iiduka. Iterative algorithm for triple-hierarchical constrained nonconvex optimization problem and its application to network bandwidth allocation. SIAM
Journal on Optimization, 22(3):862–878, 2012.
[39] H. Iiduka. Fixed point optimization algorithms for distributed optimization in
networked systems. SIAM Journal on Optimization, 23(1):1–26, 2013.
[40] H. Iiduka. Acceleration method for convex optimization over the fixed point set
of a nonexpansive mapping. Mathematical Programming, 149(1):131–165, 2015.
[41] H. Iiduka. Parallel computing subgradient method for nonsmooth convex optimization over the intersection of fixed point sets of nonexpansive mappings.
Fixed Point Theory and Applications, 2015:72, 2015.
[42] H. Iiduka. Convergence analysis of iterative methods for nonsmooth convex optimization over fixed point sets of quasi-nonexpansive mappings. Mathematical
Programming, 159(1):509–538, 2016.
[43] H. Iiduka. Incremental subgradient method for nonsmooth convex optimization
with fixed point constraints. Optimization Methods and Software, 31(5):931–951,
2016.
[44] H. Iiduka. Line search fixed point algorithms based on nonlinear conjugate gradient directions: Application to constrained smooth convex optimization. Fixed
Point Theory and Applications, 2016:77, 2016.
[45] H. Iiduka. Proximal point algorithms for nonsmooth convex optimization with
fixed point constraints. European Journal of Operational Research, 253(2):503–
513, 2016.
[46] H. Iiduka. Almost sure convergence of random projected proximal and subgradient algorithms for distributed nonsmooth convex optimization. Optimization,
66(1):35–59, 2017.
[47] H. Iiduka. Distributed optimization for network resource allocation with nonsmooth utility functions. IEEE Transactions on Control of Network Systems,
6(4):1354–1365, 2018.
[48] H. Iiduka. Stochastic fixed point optimization algorithm for classifier ensemble.
IEEE Transactions on Cybernetics, pages 1–11, 2019.
[49] H. Iiduka and K. Hishinuma. Acceleration method combining broadcast and
incremental distributed optimization algorithms. SIAM Journal on Optimization,
24(4):1840–1863, 2014.
[50] H. Iiduka and M. Uchida. Fixed point optimization algorithms for network bandwidth allocation problems with compoundable constraints. IEEE Communications Letters, 15(6):596–598, 2011.
[51] E. Jones, T. Oliphant, P. Peterson, et al. SciPy: Open source scientific tools for
Python, 2001–. [Online; accessed Aug. 16, 2018].
[52] F. Kelly. Charging and rate control for elastic traffic. European Transactions on
Telecommunications, 8(1):33–37, 1997.
[53] K. C. Kiwiel. Convergence and efficiency of subgradient methods for quasiconvex
minimization. Mathematical Programming, 90(1):1–25, 2001.
[54] I. V. Konnov. On convergence properties of a subgradient method. Optimization
Methods and Software, 18(1):53–62, 2003.
[55] M. A. Krasnosel'skiĭ. Two remarks on the method of successive approximations.
Uspekhi Matematicheskikh Nauk, 10(1(63)):123–127, 1955.
[56] T. Larsson, M. Patriksson, and A.-B. Strömberg. Conditional subgradient
optimization—theory and applications. European Journal of Operational Research, 88(2):382–403, 1996.
[57] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied
to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[58] Y. LeCun, C. Cortes, and C. J. C. Burges. The MNIST database of handwritten
digits, 1998.
[59] C.-Y. Lee, A. L. Johnson, E. Moreno-Centeno, and T. Kuosmanen. A more
efficient algorithm for convex nonparametric least squares. European Journal of
Operational Research, 227(2):391–400, 2013.
[60] E. Leopold and J. Kindermann. Text categorization with support vector machines: How to represent texts in input space? Machine Learning, 46(1):423–444,
2002.
[61] C.-J. Lin. LIBSVM data: Classification, regression, and multi-label, 2017.
[62] Y. Lin, Y. Lee, and G. Wahba. Support vector machines for classification in
nonstandard situations. Machine Learning, 46(1):191–202, 2002.
[63] W. R. Mann. Mean value methods in iteration. Proceedings of the American
Mathematical Society, 4:506–510, 1953.
[64] M. Meiss, F. Menczer, S. Fortunato, A. Flammini, and A. Vespignani. Ranking
web sites with real user traffic. In Proc. First ACM International Conference on
Web Search and Data Mining (WSDM), pages 65–75, 2008.
[65] Message Passing Interface Forum. MPI: A Message-Passing Interface Standard,
Version 3.1. High-Performance Computing Center Stuttgart, 2015.
[66] A. Nedić and D. Bertsekas. Convergence Rate of Incremental Subgradient Algorithms, pages 223–264. Springer US, Boston, MA, 2001.
[67] A. Nedić and D. P. Bertsekas. Incremental subgradient methods for nondifferentiable optimization. SIAM Journal on Optimization, 12(1):109–138, 2001.
[68] J. Nocedal and S. J. Wright. Numerical Optimization. Springer-Verlag New York,
second edition, 2006.
[69] T. Oliphant. A guide to NumPy. Trelgol Publishing, USA, 2006.
[70] Z. Opial. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bulletin of the American Mathematical Society, 73(4):591–
597, 1967.
[71] P. S. Pacheco. Parallel Programming with MPI. Morgan Kaufmann, 1996.
[72] C. Pan, C. Yin, N. C. Beaulieu, and J. Yu. Distributed resource allocation in
SDCN-based heterogeneous networks utilizing licensed and unlicensed bands. IEEE
Transactions on Wireless Communications, 17(2):711–721, 2018.
[73] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel,
M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos,
D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine
learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
[74] J.-P. Penot. Are generalized derivatives useful for generalized convex functions?
In J.-P. Crouzeix, J.-E. Martinez-Legaz, and M. Volle, editors, Generalized Convexity, Generalized Monotonicity: Recent Results, Nonconvex Optimization and
Its Applications, volume 27, pages 3–59. Kluwer Academic Publishers, 1998.
[75] F. Plastria. Lower subdifferentiable functions and their minimization by cutting
planes. Journal of Optimization Theory and Applications, 46(1):37–53, 1985.
[76] J. Platt. Sequential minimal optimization: A fast algorithm for training support
vector machines. Technical report, 1998.
[77] B. T. Polyak. Introduction to Optimization. Translation Series in Mathematics
and Engineering. Optimization Software, 1987.
[78] S. Pradhan, K. Hacioglu, V. Krugler, W. Ward, J. H. Martin, and D. Jurafsky.
Support vector learning for semantic argument classification. Machine Learning,
60(1):11–39, 2005.
[79] S. Raschka. Python Machine Learning. Packt Publishing Ltd, 2015.
[80] R. T. Rockafellar. Monotone operators associated with saddle-functions and
minimax problems. Nonlinear Functional Analysis, 18(I):397–407, 1970.
[81] R. T. Rockafellar and R. J.-B. Wets. Variational Analysis, volume 317. Springer
Science & Business Media, 2009.
[82] K. Sakurai and H. Iiduka. Acceleration of the Halpern algorithm to search for
a fixed point of a nonexpansive mapping. Fixed Point Theory and Applications,
2014(202), 2014.
[83] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter. Pegasos: primal estimated sub-gradient solver for SVM. Mathematical Programming, 127(1):3–30,
2011.
[84] K. Shimizu, K. Hishinuma, and H. Iiduka. Parallel computing proximal method
for nonsmooth convex optimization with fixed point constraints of quasi-nonexpansive
mappings. Applied Set-Valued Analysis and Optimization (accepted).
[85] K. Slavakis and I. Yamada. Robust wideband beamforming by the hybrid steepest
descent method. IEEE Transactions on Signal Processing, 55(9):4511–4522, 2007.
[86] I. M. Stancu-Minasian. Fractional Programming: Theory, Methods and Applications. Kluwer Academic Publishers, 1997.
[87] W. Takahashi. Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Inc., Yokohama, 2009.
[88] T. X. Tran and D. Pompili. Joint task offloading and resource allocation for
multi-server mobile-edge computing networks. IEEE Transactions on Vehicular
Technology, 68(1):856–868, 2019.
[89] W. F. Trench. Introduction to Real Analysis. Pearson Education, 2003.
[90] O. Tutsoy and M. Brown. An analysis of value function learning with piecewise linear control. Journal of Experimental & Theoretical Artificial Intelligence,
28(3):529–545, 2016.
[91] O. Tutsoy and M. Brown. Reinforcement learning analysis for a minimum time
balance problem. Transactions of the Institute of Measurement and Control,
38(10):1186–1200, 2016.
[92] P. Wolfe. Convergence conditions for ascent methods. SIAM Review, 11(2):226–
235, 1969.
[93] I. Yamada. The hybrid steepest descent method for the variational inequality
problem over the intersection of fixed point sets of nonexpansive mappings. In
D. Butnariu, Y. Censor, and S. Reich, editors, Inherently Parallel Algorithms
in Feasibility and Optimization and their Applications, volume 8 of Studies in
Computational Mathematics, pages 473–504. Elsevier, 2001.
[94] I. Yamada and N. Ogura. Hybrid steepest descent method for variational inequality problem over the fixed point set of certain quasi-nonexpansive mappings.
Numerical Functional Analysis and Optimization, 25(7-8):619–655, 2005.
[95] G. Yuan, Z. Meng, and Y. Li. A modified Hestenes and Stiefel conjugate gradient algorithm for large-scale nonsmooth minimizations and nonlinear equations.
Journal of Optimization Theory and Applications, 168(1):129–152, 2016.
[96] T. Zhang. Analysis of multi-stage convex relaxation for sparse regularization.
Journal of Machine Learning Research, 11:1081–1107, 2010.
Manuscript received November 15th, 2019
revised February 7th, 2020
...