Non-iterative Calculation of Parameters of a Linear Classifier with a Threshold Activation Function

Cover

How to cite

Full text: Restricted Access (subscription access)

Abstract

The relevance of artificial intelligence (AI) systems grows every year, and AI is being introduced into many fields of activity. One of the core technologies behind AI is the artificial neural network (NN). Neural networks solve a vast class of problems, such as classification, regression, autoregression, clustering, noise reduction, learning vector representations of objects, and others. This work considers the simplest case, a single neuron with the Heaviside activation function, examines fast ways to train it, and reduces the learning problem to finding the normal vector to the separating hyperplane and the bias weight. A promising direction for training NNs is non-iterative learning, especially in the context of processing and analyzing high-dimensional data. The article presents a method of non-iterative learning that speeds up the training of a single neuron substantially, by 1–2 orders of magnitude. The distinguishing feature of the approach is that it determines the hyperplane separating two classes of objects in the feature space without the repeated recalculation of weights that is typical of traditional iterative methods. Within the study, special attention is paid to the case when the principal axes of the ellipsoids describing the classes are parallel. The function pln is defined to calculate the distances between objects and the centers of their classes; from these distances, the un-normalized normal vector to the hyperplane and the bias weight are computed. In addition, the method is compared with support vector machines and logistic regression.
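The abstract reduces training to finding a normal vector to the separating hyperplane and a bias weight. A minimal sketch of that idea, under loud assumptions: the paper's pln-based construction is not reproduced here; instead the normal is taken as the difference of class centroids and the bias places the hyperplane at their midpoint, and the names `fit_threshold_neuron` and `predict` are illustrative, not from the paper.

```python
import numpy as np

def fit_threshold_neuron(X0, X1):
    """Non-iterative fit of one Heaviside neuron separating samples X0 and X1.

    Assumption: the un-normalized normal is the difference of class centroids,
    and the bias puts the decision hyperplane at the midpoint between them.
    """
    c0 = X0.mean(axis=0)          # centroid of class 0
    c1 = X1.mean(axis=0)          # centroid of class 1
    w = c1 - c0                   # un-normalized normal to the hyperplane
    midpoint = (c0 + c1) / 2.0
    b = -w @ midpoint             # shift so the hyperplane passes the midpoint
    return w, b

def predict(X, w, b):
    # Heaviside activation: class 1 when the projection exceeds the threshold
    return (X @ w + b >= 0).astype(int)
```

No gradient descent is involved: both parameters come from closed-form statistics of the two classes, which is the kind of single-pass computation that makes the non-iterative approach fast.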


About the authors

Zakhar Ponimash

FractalTech LLC

Author for correspondence.
Email: ponimashz@mail.ru
ORCID iD: 0000-0002-2095-5248

General Director

Taganrog, Rostov region, Russian Federation

Marat Potanin

FractalTech LLC

Email: potaninmt@mail.ru

Co-founder

Taganrog, Rostov region, Russian Federation


Supplementary files

1. JATS XML
2. Fig. 1. Classification problem formulation
3. Fig. 2. The case of parallel principal axes of the ellipsoids
4. Fig. 3. The case of equal variances of the projections of the two classes onto the normalized normal vector
5. Fig. 4. Distribution of projections onto a vector of dimension 4 (cannot be approximated by a normal distribution)
6. Fig. 5. Box-and-whisker plot of the distribution of projections
7. Fig. 6. The same classes after applying the kernel function, showing the normalization of the distribution as the input dimension grows to 160
8. Fig. 7. Distribution of projections (experiment 2)
9. Fig. 8. Box-and-whisker plot of the distribution of projections (experiment 2)
10. Fig. 9. Distribution of sample projections onto the normal vector (text classification, 50 000 samples)
11. Fig. 10. Distribution of sample projections onto the normal vector (text classification, 15 000 samples)

