The exponential growth of data-intensive applications has exposed critical limitations in conventional von Neumann architectures, most notably the performance bottleneck caused by the separation of memory and processing units, commonly referred to as the "memory wall." This research proposes the design and evaluation of a Processing-in-Memory (PIM) architecture tailored to big data analytics workloads. The study focuses on integrating simple arithmetic and logic units directly within memory modules, enabling data to be processed near or within memory and thereby substantially reducing data movement and energy consumption. We will evaluate the proposed architecture on representative workloads, including graph analytics, machine learning pipelines, and large-scale database operations, benchmarking its performance, energy efficiency, and scalability against conventional CPU- and GPU-based systems. In addition, this work will explore programming models and compiler-level abstractions that ease developer adoption of PIM systems. The anticipated outcome is a scalable, energy-efficient architecture that accelerates key operations in modern data analytics pipelines, with direct applications in real-time decision-making, AI inference, and edge computing environments.
Keywords: Processing-in-Memory, Big Data Analytics, Memory Wall, Energy Efficiency, Computer Architecture
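To illustrate the data-movement argument made above, the following is a minimal back-of-envelope model, not a real PIM API: it compares the number of off-chip bus transfers needed for a sum reduction when all elements are shipped to a host CPU versus when each memory bank reduces its local data and only per-bank partial results cross the bus. The function names and the bank count are illustrative assumptions, not part of the proposed architecture.

```python
def cpu_transfers(num_elements: int) -> int:
    # Conventional path: every element must cross the memory bus
    # to reach the CPU before it can be reduced.
    return num_elements


def pim_transfers(num_elements: int, num_banks: int) -> int:
    # Hypothetical PIM path: each bank reduces its local elements
    # in place, so only one partial sum per bank crosses the bus
    # to be combined by the host.
    return num_banks


if __name__ == "__main__":
    n, banks = 1_000_000, 16  # illustrative workload size and bank count
    print(f"CPU transfers: {cpu_transfers(n)}")
    print(f"PIM transfers: {pim_transfers(n, banks)}")
    print(f"Data-movement reduction: {cpu_transfers(n) // pim_transfers(n, banks)}x")
```

Under these toy assumptions, off-chip traffic drops from one transfer per element to one per bank; real savings depend on the operation's reducibility and the memory organization, which the proposed evaluation is intended to quantify.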