Victorio wrote: If the GPU can be used to run an exe module, that would be very interesting, but I do not know how. And is it true that the GPU can give a 100-1000x speedup?
On my site, a full installation of the Eidos system is posted:
http://lc.kubagro.ru/aidos/_Aidos-X.htm
This system calls the GPU recognition module as an external program. The developer is Dmitry Bandyk from Belarus. The module accelerates the classification process by up to 3000 times. This is a really measured acceleration, although the speedup is usually smaller, on the order of 100 times. I have a simple NVIDIA GT240 video card with only 96 cores, but that is still ten times more than the number of computing cores in the central processor (CPU), and the most powerful NVIDIA graphics cards have about 3000 shader processors. This GPU recognition module is part of the Eidos system installation. The source code of the system is here:
http://lc.kubagro.ru/__AIDOS-X.txt
Launching the GPU recognition module looks like this: LC_RunShell("Model_rec.exe", 90392051). The launch function LC_RunShell() is easy to find by searching for "n LC_RunShell(". The function checks that the module to be launched is present and verifies its checksum (for security reasons). If the module is not in the current directory, or its checksum does not match, a message is displayed; if everything is normal, the GPU module is started. It takes Eidos system databases as input, and its output is also Eidos system databases. In terms of input and output it is no different from the recognition function that implements the same operations on the CPU.
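The launch pattern described above (check presence, verify checksum, then execute the external module) can be sketched as follows. This is a minimal illustration in Python, not the actual xBase implementation from the Eidos sources; in particular, the use of CRC32 as the checksum algorithm is an assumption, and the function name run_shell is just a stand-in for LC_RunShell().

```python
import os
import subprocess
import zlib

def run_shell(module_name: str, expected_checksum: int) -> bool:
    """Sketch of the LC_RunShell() pattern: verify that an external
    module exists and has the expected checksum, then launch it.

    NOTE: CRC32 is an assumption here; the real Eidos implementation
    (in xBase) may compute its checksum differently.
    """
    # 1. Check that the module is present in the current directory.
    if not os.path.isfile(module_name):
        print(f"Module {module_name} not found in the current directory.")
        return False

    # 2. Verify the checksum before execution (for security reasons).
    with open(module_name, "rb") as f:
        actual = zlib.crc32(f.read()) & 0xFFFFFFFF
    if actual != expected_checksum:
        print(f"Checksum mismatch for {module_name}: "
              f"expected {expected_checksum}, got {actual}.")
        return False

    # 3. If everything is normal, start the module and wait for it to
    #    finish; it reads and writes Eidos databases on disk.
    subprocess.run([module_name], check=True)
    return True
```

In this sketch the module communicates with the caller purely through files on disk, which matches the description above: the GPU module reads Eidos system databases and writes Eidos system databases, so the caller only needs to launch it and wait.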
Here is an article describing research that uses this technology:
http://ej.kubagro.ru/2018/10/pdf/33.pdf. Without this technology, such research would be practically impossible.
FORMATION OF A SEMANTIC KERNEL IN VETERINARY MEDICINE WITH THE AUTOMATED SYSTEM-COGNITIVE ANALYSIS OF PASSPORTS OF SCIENTIFIC SPECIALTIES OF THE HIGHER ATTESTATION COMMISSION OF THE RUSSIAN FEDERATION AND THE AUTOMATIC CLASSIFICATION OF TEXTS ACCORDING TO THE AREAS OF SCIENCE
This work continues the author's series of works on cognitive veterinary medicine. The present period is characterized by the appearance in open access of huge volumes of human-generated texts in different languages. Currently, these texts are accumulated in various electronic libraries and bibliographic databases (WoS, Scopus, RSCI, etc.), as well as on various Internet sites. All these texts have specific authors and dates and can belong simultaneously to many non-alternative categories and genres, in particular: educational; scientific; artistic; political; news; chats; forums; and many others. Of great scientific and practical interest is the generalized problem of text attribution, i.e. studying these texts so as to reveal their probable authors and dates of creation, assign them to the generalized categories or genres above, evaluate the similarities and differences of authors and texts according to their content, highlight key words, etc. To solve all these problems it is necessary to form generalized linguistic images of texts grouped into classes, i.e. to form semantic kernels of classes. A special case of this problem is the creation of the semantic kernel for the various scientific specialties of the HAC of the Russian Federation and the automatic classification of scientific texts by area of science. Traditionally, this task is solved by dissertation councils, i.e. by experts, on the basis of expert assessments: informally, relying on experience, intuition and professional competence. However, the traditional approach has a number of serious drawbacks that impose significant limitations on the quality and volume of analysis. There are now grounds to consider these restrictions unacceptable, because they can be overcome. Thus, there is a problem whose solution is the subject of this article.
The efforts of researchers and developers to overcome these limitations are therefore relevant. The aim of the work is to develop an automated technology (a method and tools), as well as methods of applying them, for forming the semantic core of veterinary medicine by automated system-cognitive analysis of the passports of the scientific specialties of the HAC of the Russian Federation, with automatic classification of texts by area of science. A detailed numerical example of solving this problem on real data is given as well.