An optimizing method for performance and resource utilization in quantum machine learning circuits.

Title: An optimizing method for performance and resource utilization in quantum machine learning circuits.
Publication Type: Journal Article
Year of Publication: 2022
Authors: Salehi T, Zomorodi M, Plawiak P, Abbaszade M, Salari V
Journal: Scientific Reports
Volume: 12
Issue: 1
Start Page: 16949
Date Published: 2022/10/10
ISSN: 2045-2322
Keywords: Salehi2022
Abstract

Quantum computing is a new and advanced field that refers to computation based on the principles of quantum mechanics. It allows certain kinds of problems to be solved more easily than on classical computers. This advantage can be exploited to tackle many existing problems in different fields far more effectively. One important field in which quantum computing has shown great results is machine learning. To date, many quantum algorithms have been presented to perform different machine learning approaches, and in some special cases their execution time is reduced exponentially compared to the classical ones. At the same time, as data volume and computation time grow, shielding the system from unwanted interactions with the environment becomes a daunting task, and since these algorithms target machine learning problems, which usually involve big data, their implementation is very costly in terms of quantum resources. In this paper, we propose an approach to reduce the cost of quantum circuits and, in particular, to optimize quantum machine learning circuits. To reduce the number of resources used, the approach combines different optimization algorithms and is applied to quantum machine learning algorithms for big data. In this case, the optimized circuits run the quantum machine learning algorithms in less time than the original ones while preserving the original functionality. Our approach reduces the number of quantum gates by 10.7% and 14.9% in two different circuits, respectively. This is the reduction obtained for one iteration of a given sub-circuit U in the main circuit; when this sub-circuit is repeated more times in the main circuit, the optimization rate increases. Therefore, by applying the proposed method to circuits with big data, both cost and performance are improved.
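The sketch below is a minimal, hypothetical illustration (not the authors' implementation, and the toy numbers do not match the 10.7%/14.9% figures reported above) of two ideas in the abstract: a simple gate-count optimization pass that removes adjacent self-inverse gate pairs while preserving the circuit's functionality, and how a per-copy saving on a repeated sub-circuit U compounds over the whole circuit.

```python
# Hypothetical sketch: gate-count reduction and how it scales with repetitions of U.

def cancel_adjacent_self_inverses(gates):
    """Remove adjacent identical self-inverse gates (e.g. H, X, CNOT) acting
    on the same qubits; each such pair multiplies to the identity, so the
    circuit's functionality is preserved."""
    self_inverse = {"H", "X", "Y", "Z", "CNOT"}
    optimized = []
    for gate in gates:
        if optimized and optimized[-1] == gate and gate[0] in self_inverse:
            optimized.pop()  # the adjacent pair cancels to identity
        else:
            optimized.append(gate)
    return optimized

# Toy sub-circuit U: each gate is (name, qubits); the two middle H gates cancel.
U = [("CNOT", (0, 1)), ("H", (1,)), ("H", (1,)), ("X", (0,)), ("CNOT", (0, 1))]
U_opt = cancel_adjacent_self_inverses(U)
print(len(U), "->", len(U_opt), "gates in one copy of U")

# If U is repeated k times in the main circuit (as is typical when encoding
# big data or layering a QML ansatz), the per-copy saving scales with k.
k = 100
saved = k * (len(U) - len(U_opt))
print(f"{saved} gates saved over {k} repetitions "
      f"({saved / (k * len(U)):.1%} of the unoptimized gate count)")
```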

URL: https://doi.org/10.1038/s41598-022-20375-5
DOI: 10.1038/s41598-022-20375-5

Revision history

Last updated: 07/11/2022 - 15:07; updated by: Łukasz Zimny (lzimny@iitis.pl)