Browsing by Issue Date, starting with "2025-12-11"
Now showing 1 - 1 of 1
- Massively parallel GPU acceleration of population-based optimization metaheuristics: application to the solution of large-scale systems of nonlinear equations
  Publication. Silva, Bruno Miguel Pereira da; Lopes, Luiz Carlos Guerreiro; Mendonça, Fábio Rúben Silva

  High-dimensional problems, such as large-scale systems of nonlinear equations, are challenging due to their complexity and nonlinear solution spaces. Population-based optimization metaheuristics, such as Particle Swarm Optimization and the Gray Wolf Optimizer, can offer effective approaches. However, their computational demands often exceed the capacity of traditional methods, particularly when these problems are addressed at large scales. To meet these demands, parallelization is a promising strategy. Owing to its massive parallel processing capability, a Graphics Processing Unit (GPU) is well suited to accelerating population-based metaheuristic optimization algorithms. Employing GPU parallelism can thus substantially reduce computational time and enable the solution of larger and more complex problems that would be impractical on conventional Central Processing Units (CPUs).

  GPU-based parallelization of metaheuristic optimization algorithms nevertheless faces several challenges arising from algorithmic diversity and heterogeneous hardware architectures. Different metaheuristics exhibit distinct computational patterns, memory access requirements, and degrees of inherent parallelism, which complicates efficient mapping to GPU architectures. Moreover, variations in GPU hardware can substantially affect performance, often requiring algorithm-specific adaptations and hardware-aware optimizations to fully exploit GPU resources.

  This research proposes GPU-based parallelization strategies for population-based metaheuristic algorithms to enhance performance on large-scale, high-dimensional optimization problems. It uses GPU parallelism to manage increasing problem sizes while preserving convergence behavior and solution quality. A central goal is a hardware-agnostic model that enables scalable acceleration across diverse computational environments, providing a general framework for GPU-based metaheuristic acceleration applicable to various algorithmic paradigms and problem domains.

  Experimental results indicate that GPU-accelerated metaheuristics built on the proposed framework substantially outperform their sequential counterparts, achieving significant speedups. The framework scaled effectively across ten population-based algorithms and ten benchmark problems of increasing dimensionality, running on five GPU models spanning consumer-grade and professional-grade hardware. In multi-GPU tests, the framework exhibited superlinear speedup in certain cases. This study highlights the value of modular, reproducible frameworks for GPU-based metaheuristics and provides a basis for future research in high-dimensional, computationally intensive optimization.
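To make the core idea concrete, the sketch below shows one common way of casting a system of nonlinear equations as an optimization problem: minimizing the sum of squared residuals, which a population-based metaheuristic can then drive toward zero. A CUDA kernel evaluates that objective for an entire population of candidate solutions in parallel, one thread per candidate. This is a minimal illustration under standard assumptions, not the thesis's actual framework; the residual function, population layout, sizes, and all names are placeholders.

```cuda
// Minimal sketch (illustrative, not the publication's framework): each GPU
// thread computes the least-squares objective F(x) = sum_i f_i(x)^2 for one
// candidate solution, so the fitness of the whole population is evaluated in
// parallel. A metaheuristic seeks F(x) = 0, which solves f_i(x) = 0 for all i.
#include <cstdio>
#include <cuda_runtime.h>

__device__ double residual(const double *x, int i, int dim) {
    // Placeholder nonlinear equation f_i(x) = 0; a real system would go here.
    return x[i] * x[i] - cos(x[(i + 1) % dim]) - 1.0;
}

__global__ void evaluate_population(const double *population, double *fitness,
                                    int pop_size, int dim) {
    int p = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per candidate
    if (p >= pop_size) return;
    const double *x = population + (size_t)p * dim;  // row-major population layout
    double sum = 0.0;
    for (int i = 0; i < dim; ++i) {
        double f = residual(x, i, dim);
        sum += f * f;  // sum-of-squared-residuals merit function
    }
    fitness[p] = sum;
}

int main() {
    const int pop_size = 1024, dim = 10000;  // large-scale, high-dimensional setting
    double *d_pop, *d_fit;
    cudaMalloc(&d_pop, sizeof(double) * pop_size * dim);
    cudaMalloc(&d_fit, sizeof(double) * pop_size);
    cudaMemset(d_pop, 0, sizeof(double) * pop_size * dim);  // placeholder initialization

    int threads = 256, blocks = (pop_size + threads - 1) / threads;
    evaluate_population<<<blocks, threads>>>(d_pop, d_fit, pop_size, dim);
    cudaDeviceSynchronize();  // error checking omitted for brevity

    double first;
    cudaMemcpy(&first, d_fit, sizeof(double), cudaMemcpyDeviceToHost);
    printf("fitness of candidate 0: %f\n", first);

    cudaFree(d_pop);
    cudaFree(d_fit);
    return 0;
}
```

In a full metaheuristic such as PSO or GWO, a kernel of this kind would typically be launched once per iteration, with the position and velocity updates also kept on the device to avoid repeated host-device transfers.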
