https://www.syncsci.com/journal/RIMA/issue/feed
Research on Intelligent Manufacturing and Assembly
2025-12-25T00:00:00+08:00
Snowy Wang snowy.wang@syncsci.com
<p><em><strong>Research on Intelligent Manufacturing and Assembly</strong></em> (RIMA) (eISSN: 2972-3329) is an international, peer-reviewed, open-access journal dedicated to the latest advancements in intelligent manufacturing and assembly. RIMA serves as a critical bridge between cutting-edge research and practical applications, fostering collaboration between the academic community and industry practitioners. The journal aims to publish high-impact research that pushes the boundaries of knowledge in the design, analysis, manufacturing, and operation of intelligent systems and equipment. RIMA focuses on innovative technologies and methodologies that are transforming the manufacturing landscape, driving efficiency, precision, and sustainability in industrial processes. By publishing rigorous research and fostering a vibrant community of scholars and practitioners, RIMA aims to be the go-to resource for advancing the state of the art in intelligent manufacturing and assembly.</p>
<p>Topics of interest include, but are not limited to, the following: <br>• Digital design and manufacturing <br>• Theories, methods, and systems for intelligent design <br>• Advanced processing techniques <br>• Modelling, control, optimization, and scheduling of systems <br>• Manufacturing system simulation and digital twin technology <br>• Industrial control systems and the industrial Internet of Things (IIoT) <br>• Safety and reliability assessment <br>• Robotics and automation <br>• Artificial intelligence and machine learning in manufacturing <br>• Supply chain optimization and management <br>• Additive manufacturing and materials science <br>• Cybersecurity and data privacy in manufacturing <br>• Sustainability and circular economy in manufacturing <br>• Bio-fabrication and other advanced manufacturing methods
<br>• Digital workforce and automation <br>• <em>etc.</em></p>
https://www.syncsci.com/journal/RIMA/article/view/RIMA.2025.02.003
Probabilistic-based Multi-objective Optimization of Aromatic Extraction Process
2025-07-28T11:41:18+08:00
Maosheng Zheng mszhengok@aliyun.com
Jie Yu editor@syncsci.com
<p>Aromatics extraction is a crucial step in the aromatics production process. Optimization of the aromatics extraction process is of great significance for enhancing the overall efficiency of the aromatics unit system while minimizing process energy consumption. Product purity and process energy consumption are fundamental metrics that need to be optimized simultaneously, which makes this a multi-objective optimization problem. However, a careful analysis reveals that previous multi-objective optimization methods, despite offering algorithms, lack a clear methodological perspective. This article provides a procedure for maximizing product purity and minimizing process energy consumption during aromatic extraction by means of probabilistic multi-objective optimization (PMOO) together with regression, so as to supply optimum parameters for aromatic extraction. PMOO is grounded in systems theory: it adopts probability theory to treat the simultaneous optimization of multiple objectives and introduces the concept of "preferable probability", thereby establishing a methodology of probabilistic multi-objective optimization. The objectives evaluated for each candidate in the optimization task are first divided into two basic types, <em>i.e.</em>, the beneficial type and the unbeneficial (cost) type, and corresponding quantitative evaluation methods for the partial preferable probabilities are formulated for these two types. Treating the overall optimization problem as a system, the simultaneous optimization of multiple attributes is analyzed as the simultaneous occurrence of multiple events in probability theory.
Therefore, the total preferable probability of each alternative candidate is the product of the partial preferable probabilities of all its attributes, which optimizes the system as a whole. Finally, all alternative candidates are ranked according to their values of total preferable probability. Besides, in the optimization, the functional relationship between the total preferable probability and the input variables is regressed to obtain the optimum status and corresponding parameters from limited but reliable test data. This method opens up a new way to solve multi-objective problems and has broad application prospects.</p>
2025-07-28T11:41:18+08:00
Copyright (c) 2025 Maosheng Zheng, Jie Yu
https://www.syncsci.com/journal/RIMA/article/view/RIMA.2025.02.002
A Data-Driven Evaluation of ECD Measurement Techniques Across Traditional and AI-Based Modalities
2025-07-03T09:41:10+08:00
Manghe Fidelis Obi manghe.2@wright.edu
Andy Officer aofficer@lebwcoonline.org
Shannon Schweitzer sschweitzer@lebwcoonline.org
Tarun Goswami tarun.goswami@wright.edu
<p>Accurate measurement of corneal endothelial cell density (ECD) is crucial in evaluating the viability of donor corneas for transplantation. The consistency of ECD measurements is critical for predicting post-transplant outcomes and monitoring corneal health. However, measurement methods have evolved, moving from manual counting to more complex semi-automated and fully automated systems, including AI-powered solutions. This study compares the accuracy, dependability, and efficiency of manual, semi-automated, and fully automated ECD measurement techniques. It investigates the degree of heterogeneity among techniques and evaluates their potential to improve clinical outcomes in corneal transplantation. The sample includes corneal data from 300 participants, 150 male and 150 female donors, who were divided into three groups based on the measurement method: manual, semi-automated, or fully automated.
The study also examined the gender distribution to see whether there was any difference in results between male and female donor corneas. Manual counting has previously been notable for its variability due to operator expertise and calibration discrepancies, with mean ECD values ranging from 2146 to 2775 cells/mm² (p < 0.05). Semi-automated procedures, which combine manual input with software assistance, enhance consistency. In the Cornea Preservation Time Study, eye banks reported a mean ECD of 2773 ± 300 cells/mm², while CIARC reported 2758 ± 388 cells/mm², with limits of agreement ranging from [-644, 675] cells/mm² (p < 0.05). The AxoNet deep learning model had a mean absolute error (MAE) of 12.1 cells/mm² and an R² value of 0.948, making it the most accurate fully automated system. A separate study on AI-based detection of aberrant endothelial cells achieved an accuracy of 0.95, a precision of 0.92, a recall of 0.94, an F1 score of 0.93, and an AUC-ROC of 0.98 (p < 0.01). Fully automated AI-based methods surpass manual and semi-automated approaches in accuracy and consistency, significantly reducing time and labor. The findings highlight the importance of adopting AI-driven technologies to enhance diagnostic precision and efficiency in clinical settings. However, the need for standardized calibration procedures and high-quality image acquisition remains critical for reliable ECD measurement.</p>
2025-07-02T15:32:51+08:00
Copyright (c) 2025 Manghe Fidelis Obi, Shannon Schweitzer, Tarun Goswami
https://www.syncsci.com/journal/RIMA/article/view/RIMA.2025.02.001
Machine Learning Approaches to Predicting Pacemaker Battery Life
2025-06-25T16:20:50+08:00
Samikshya Neupane editor@syncsci.com
Tarun Goswami tarun.goswami@wright.edu
<p>Accurate prediction of pacemaker battery life is critical to timely generator replacement and patient safety.
We evaluated three regression approaches: multilayer perceptron neural networks (NN), random forests (RF), and linear regression (LR), using 42 real-world interrogation reports spanning single-, dual-, and triple-chamber Medtronic devices. Key electrical parameters (battery voltage/current, lead impedance, capture thresholds, pacing percentages, <em>etc.</em>) were modelled. Performance was quantified with mean absolute error (MAE), mean squared error (MSE), and the coefficient of determination (R²). NNs achieved the highest accuracy (R² ≈ 1.0; MAE < 0.1 months), RF provided robust results (R² ≈ 0.85), whereas LR exhibited limited predictive fidelity (R² ≤ 0.41). Monte Carlo simulations (n = 1000) and 95% prediction intervals characterized predictive uncertainty; residual and Q-Q analyses verified statistical assumptions. Our findings indicate that a data-driven NN framework can reliably forecast remaining battery longevity, enabling proactive replacement scheduling and reducing unexpected generator depletion. The methodology is compatible with different manufacturers and suitable for integration within remote device follow-up systems to enhance longitudinal cardiac care.</p>
2025-06-25T16:20:36+08:00
Copyright (c) 2025 Samikshya Neupane, Tarun Goswami
https://www.syncsci.com/journal/RIMA/article/view/RIMA.2025.02.004
Formation of Motivated Adaptive Erudite AGI Twin with Reflexive Multimodal Ontology by Ensembles of Intelligent Agents
2025-08-06T16:29:06+08:00
Evgeniy Bryndin bryndin15@yandex.ru
<p>The development of artificial intelligence and ensembles of intelligent agents has enabled the formation of a motivated, adaptive, and erudite AGI twin with a reflexive multimodal ontology. Forming such a motivated, adaptive, intelligent multimodal digital twin with reflexive erudition and an ontology based on ensembles of intelligent agents combines several key technologies and methods for creating highly effective systems that model and simulate real objects or processes.
Motivation allows the digital twin not only to accurately reproduce the characteristics of the original object or system, but also to independently determine goals, motives, and interaction strategies, ensuring its adaptability to changing conditions and tasks. Multimodality, the use of various types of data and sensory channels (visual, auditory, tactile, etc.), allows the twin to perceive and process information in a variety of formats, increasing the accuracy and completeness of results. Building the digital twin from specialized agents that interact with each other and unite into ensembles to solve complex problems makes it possible to distribute functions and to increase flexibility and scalability. Providing the twin with reflection (analysis of its own decisions and behavior) and with erudition for accumulating and using knowledge expands its scope of activity and enables learning from experience. An ontology of knowledge, describing the entities, properties, and relationships of objects as well as practical skills, promotes compatibility and extensibility of activity with people. Practical implementation includes, first, the development of an architecture for multimodal data and algorithms for processing them; second, the creation and training of agent ensembles using machine learning methods and neural networks; third, the introduction of reflection and self-learning mechanisms to increase the motivation and adaptability of the system; and fourth, the formalization of ontologies for structuring knowledge and integrating skills with other systems. This approach finds application in robotics, virtual assistants, monitoring and control systems, and the modeling of complex dynamic systems where a high degree of flexibility and AGI-level intelligence is required.</p>
2025-08-06T16:29:06+08:00
Copyright (c) 2025 Evgeniy Bryndin
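The probabilistic multi-objective optimization (PMOO) ranking procedure described in the first abstract above can be sketched in a few lines of code. This is a minimal illustration only: it assumes the linear partial-preferable-probability normalizations (proportional to the value for a beneficial attribute, and to max + min - value for a cost attribute), and the candidate data below are hypothetical, not taken from the article.

```python
# Sketch of PMOO ranking: partial preferable probabilities per attribute,
# total preferable probability as their product, candidates ranked by total.

def partial_preferable_probabilities(values, beneficial=True):
    """Partial preferable probabilities for one attribute; they sum to 1."""
    if beneficial:
        scores = list(values)           # beneficial: higher value preferred
    else:
        hi, lo = max(values), min(values)
        scores = [hi + lo - v for v in values]  # cost: lower value preferred
    total = sum(scores)
    return [s / total for s in scores]

def rank_candidates(attributes):
    """attributes: list of (values_per_candidate, beneficial_flag) pairs.
    Returns (ranking of candidate indices, total preferable probabilities)."""
    n = len(attributes[0][0])
    totals = [1.0] * n
    for values, beneficial in attributes:
        for i, p in enumerate(partial_preferable_probabilities(values, beneficial)):
            totals[i] *= p              # simultaneous events: multiply
    ranking = sorted(range(n), key=lambda i: totals[i], reverse=True)
    return ranking, totals

# Hypothetical data: three extraction settings, two objectives.
ranking, totals = rank_candidates([
    ([99.2, 98.5, 99.0], True),     # product purity (%): beneficial
    ([120.0, 100.0, 110.0], False), # energy consumption (a.u.): cost
])
```

In a full study, the abstract's final step would then regress the total preferable probability against the input process variables to locate the optimum operating parameters from limited test data.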